On Sat, 16 Jun 2007, Ingo Molnar wrote:
* malc <[EMAIL PROTECTED]> wrote:
Interesting, the idle time accounting (done from
account_system_time()) has not changed. Has your .config changed?
Could you please send it across. I've downloaded apc and I am trying
to reproduce your problem.
http:/
* malc <[EMAIL PROTECTED]> wrote:
> > Interesting, the idle time accounting (done from
> > account_system_time()) has not changed. Has your .config changed?
> > Could you please send it across. I've downloaded apc and I am trying
> > to reproduce your problem.
>
> http://www.boblycat.org/~mal
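For readers without the tool at hand: a minimal, simplified sketch of deriving overall busy/idle percentages from two /proc/stat samples. This is not apc itself, and which fields are counted as busy here is my own assumption:

/* sketch: overall busy/idle percentages from two /proc/stat samples.
 * Not apc; only the aggregate "cpu" line is read and the busy/idle
 * split is a simplification. */
#include <stdio.h>
#include <unistd.h>

static int read_cpu(unsigned long long *busy, unsigned long long *idle)
{
    unsigned long long usr, nic, sys, idl, iow = 0, irq = 0, sirq = 0;
    FILE *f = fopen("/proc/stat", "r");

    if (!f)
        return -1;
    if (fscanf(f, "cpu %llu %llu %llu %llu %llu %llu %llu",
               &usr, &nic, &sys, &idl, &iow, &irq, &sirq) < 4) {
        fclose(f);
        return -1;
    }
    fclose(f);
    *busy = usr + nic + sys + irq + sirq;
    *idle = idl + iow;
    return 0;
}

int main(void)
{
    unsigned long long b0, i0, b1, i1;

    for (;;) {
        if (read_cpu(&b0, &i0))
            return 1;
        sleep(1);
        if (read_cpu(&b1, &i1))
            return 1;
        unsigned long long db = b1 - b0, di = i1 - i0;
        if (db + di)
            printf("busy %5.1f%%  idle %5.1f%%\n",
                   100.0 * db / (db + di), 100.0 * di / (db + di));
    }
}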
On Sat, 16 Jun 2007, Balbir Singh wrote:
malc wrote:
On Fri, 15 Jun 2007, Balbir Singh wrote:
malc wrote:
On Thu, 14 Jun 2007, Ingo Molnar wrote:
[..snip..]
Now integral load matches the one obtained via the "accurate" method.
However the reports for individual cores are off by around 20%.
malc wrote:
> On Fri, 15 Jun 2007, Balbir Singh wrote:
>
>> malc wrote:
>>> On Thu, 14 Jun 2007, Ingo Molnar wrote:
>>>
>
> [..snip..]
>
>>>
>>> Now integral load matches the one obtained via the "accurate" method.
> >>> However the reports for individual cores are off by around 20%.
>>>
>>
On Fri, 15 Jun 2007, Balbir Singh wrote:
malc wrote:
On Thu, 14 Jun 2007, Ingo Molnar wrote:
[..snip..]
Now integral load matches the one obtained via the "accurate" method.
However the reports for individual cores are off by around 20%.
I think I missed some of the context, is t
malc wrote:
> On Thu, 14 Jun 2007, Ingo Molnar wrote:
>
>>
>> * malc <[EMAIL PROTECTED]> wrote:
>>
the alternating balancing might be due to an uneven number of tasks
perhaps? If you have 3 tasks on 2 cores then there's no other
solution to achieve even performance of each task but to rotate them
amongst the cores.
On Thu, 14 Jun 2007, Ingo Molnar wrote:
* malc <[EMAIL PROTECTED]> wrote:
the alternating balancing might be due to an uneven number of tasks
perhaps? If you have 3 tasks on 2 cores then there's no other
solution to achieve even performance of each task but to rotate them
amongst the cores.
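Back-of-the-envelope (my arithmetic, not from the mails): with 3 always-runnable tasks on 2 cores each task can only average 2/3 of a core, so a per-core monitor will see the load hop around even though each task's throughput stays even:

/* sketch: fair share per task when runnable tasks outnumber cores */
#include <stdio.h>

int main(void)
{
    int cores = 2, tasks = 3;                 /* the case discussed above */
    double share = (double)cores / tasks;     /* fraction of one core */

    printf("%d tasks on %d cores -> %.1f%% of a core per task\n",
           tasks, cores, 100.0 * share);      /* prints 66.7% */
    return 0;
}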
* malc <[EMAIL PROTECTED]> wrote:
> > the alternating balancing might be due to an uneven number of tasks
> > perhaps? If you have 3 tasks on 2 cores then there's no other
> > solution to achieve even performance of each task but to rotate them
> > amongst the cores.
>
> One task, one thread.
On Thu, 14 Jun 2007, Ingo Molnar wrote:
* Vassili Karpov <[EMAIL PROTECTED]> wrote:
Hello Ingo and others,
After reading http://lwn.net/Articles/236485/ and noticing a few
references to accounting I decided to give CFS a try. With
sched-cfs-v2.6.21.4-16 I get pretty weird results; it seems like the
scheduler is dead set on trying to move the processes to different
CPUs/cores all the time.
* Vassili Karpov <[EMAIL PROTECTED]> wrote:
> Hello Ingo and others,
>
> After reading http://lwn.net/Articles/236485/ and noticing a few
> references to accounting I decided to give CFS a try. With
> sched-cfs-v2.6.21.4-16 I get pretty weird results; it seems like the
> scheduler is dead set on trying to move the processes to different
> CPUs/cores all the time.
Hello Ingo and others,
After reading http://lwn.net/Articles/236485/ and noticing a few references
to accounting I decided to give CFS a try. With sched-cfs-v2.6.21.4-16
I get pretty weird results; it seems like the scheduler is dead set on trying
to move the processes to different CPUs/cores all the time.
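One way to watch that kind of migration directly from userspace (a sketch of mine, not something from the thread) is a busy loop that logs every CPU change it observes via sched_getcpu():

/* sketch: log every time the scheduler moves this task to another CPU.
 * Needs _GNU_SOURCE for sched_getcpu() (glibc). */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <time.h>

int main(void)
{
    int last = -1;
    time_t t0 = time(NULL);

    for (;;) {
        int cpu = sched_getcpu();
        if (cpu != last) {
            printf("[%4lds] running on cpu %d\n",
                   (long)(time(NULL) - t0), cpu);
            fflush(stdout);
            last = cpu;
        }
    }
}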
* malc <[EMAIL PROTECTED]> wrote:
> This situation is harder to write a hog-like testcase for. Anyhow it
> seems the difference in percentage stems from the `intr' field of
> `/proc/stat', which fits. And the following patch (which should be applied
> on top of yours) seems to help. I wouldn't rea
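Whether the discrepancy is really the `intr' line or the per-cpu irq/softirq time fields is not clear from the snippet, but a per-core breakdown is easy to eyeball with a sketch like this (field layout as in 2.6-era kernels; the variable names are mine):

/* sketch: per-cpu breakdown from /proc/stat, showing how much of the
 * busy time is irq+softirq.  Field layout per 2.6-era kernels. */
#include <ctype.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    char line[512];
    FILE *f = fopen("/proc/stat", "r");

    if (!f)
        return 1;
    while (fgets(line, sizeof line, f)) {
        int id;
        unsigned long long u, n, s, idl, iow, irq, sirq;

        if (strncmp(line, "cpu", 3) || !isdigit((unsigned char)line[3]))
            continue;        /* skip the aggregate "cpu" line and others */
        if (sscanf(line + 3, "%d %llu %llu %llu %llu %llu %llu %llu",
                   &id, &u, &n, &s, &idl, &iow, &irq, &sirq) == 8)
            printf("cpu%d: busy %llu (irq+softirq %llu), idle %llu\n",
                   id, u + n + s + irq + sirq, irq + sirq, idl + iow);
    }
    fclose(f);
    return 0;
}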
On Mon, 26 Mar 2007, Con Kolivas wrote:
On Monday 26 March 2007 09:01, Con Kolivas wrote:
On Monday 26 March 2007 03:14, malc wrote:
On Mon, 26 Mar 2007, Con Kolivas wrote:
On Monday 26 March 2007 01:19, malc wrote:
Erm... I just looked at the code and suddenly it stopped making any sense
at all:
On Monday 26 March 2007 15:11, Al Boldi wrote:
> Con Kolivas wrote:
> > Ok this one is heavily tested. Please try it when you find the time.
>
> It's better, but still skewed. Try two chew.c's; they account 80% each.
>
> > ---
> > Currently we only do cpu accounting to userspace based on what is
>
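The weakness the quoted description alludes to is that tick-based accounting charges a whole tick to whoever happens to be on the CPU when the timer interrupt fires. A task that does its work between ticks and sleeps across them gets under-charged; roughly like this sketch (the 2ms busy / 100us sleep split is an arbitrary guess, not tuned to any HZ):

/* sketch: a burner that tries to be off the CPU whenever the timer
 * tick fires, so purely tick-sampled accounting under-reports it. */
#include <sys/time.h>
#include <unistd.h>

static long long now_us(void)
{
    struct timeval tv;
    gettimeofday(&tv, NULL);
    return (long long)tv.tv_sec * 1000000 + tv.tv_usec;
}

int main(void)
{
    for (;;) {
        long long start = now_us();
        while (now_us() - start < 2000)
            ;                   /* ~2ms of real work */
        usleep(100);            /* yield, hopefully across the tick */
    }
}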
On Mon, 2007-03-26 at 08:11 +0300, Al Boldi wrote:
> > +	/* Sanity check. It should never go backwards or ruin accounting */
> > +	if (unlikely(now < p->last_ran))
> > +		goto out_set;
>
> If sched_clock() goes backwards, why not fix it, instead of hacking around
> it?
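The quoted hunk simply refuses to account a negative interval. Outside the scheduler the same defensive pattern looks roughly like this (a generic userspace sketch, not the kernel code):

/* sketch: clamp deltas from a clock source that may step backwards
 * (e.g. unsynchronised TSCs across CPUs) instead of corrupting the
 * running totals. */
#include <stdio.h>

static unsigned long long last_ran;

static unsigned long long charge_ns(unsigned long long now)
{
    unsigned long long delta = 0;

    if (now >= last_ran)          /* normal case */
        delta = now - last_ran;
    /* else: clock went backwards, account nothing for this interval */
    last_ran = now;
    return delta;
}

int main(void)
{
    unsigned long long samples[] = { 100, 250, 240, 400 };  /* 250 -> 240 steps back */
    unsigned i;

    for (i = 0; i < sizeof samples / sizeof samples[0]; i++)
        printf("now=%llu charged=%llu\n", samples[i], charge_ns(samples[i]));
    return 0;
}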
Con Kolivas wrote:
>
> Ok this one is heavily tested. Please try it when you find the time.
It's better, but still skewed. Try two chew.c's; they account 80% each.
> ---
> Currently we only do cpu accounting to userspace based on what is
> actually happening precisely on each tick. The accuracy
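I don't have the chew.c being referred to here, but the usual test program of this sort is a plain busy loop that also times itself, so its own idea of how much CPU it got can be compared with what top and /proc report; a rough stand-in:

/* rough stand-in for a chew.c-style hog (not the original): burn CPU
 * and periodically report how much of one CPU this task believes it
 * received, by summing only the short gaps between iterations. */
#include <stdio.h>
#include <sys/time.h>

static double now(void)
{
    struct timeval tv;
    gettimeofday(&tv, NULL);
    return tv.tv_sec + tv.tv_usec / 1e6;
}

int main(void)
{
    double start = now(), last = start, ran = 0.0;

    for (;;) {
        double t = now();
        if (t - last < 0.001)     /* short gap: we were running */
            ran += t - last;
        last = t;
        if (t - start >= 5.0) {   /* report every ~5s of wall time */
            printf("self-measured: %.1f%% of one CPU\n",
                   100.0 * ran / (t - start));
            fflush(stdout);
            start = t;
            ran = 0.0;
        }
    }
}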
On Monday 26 March 2007 09:01, Con Kolivas wrote:
> On Monday 26 March 2007 03:14, malc wrote:
> > On Mon, 26 Mar 2007, Con Kolivas wrote:
> > > On Monday 26 March 2007 01:19, malc wrote:
> > Erm... I just looked at the code and suddenly it stopped making any sense
> > at all:
> >
> > p->l
On Monday 26 March 2007 03:14, malc wrote:
> On Mon, 26 Mar 2007, Con Kolivas wrote:
> > On Monday 26 March 2007 01:19, malc wrote:
> >> On Mon, 26 Mar 2007, Con Kolivas wrote:
> >>> So before we go any further with this patch, can you try the following
> > > one and see if this simple sanity check is enough?
On Mon, 26 Mar 2007, Con Kolivas wrote:
On Monday 26 March 2007 01:19, malc wrote:
On Mon, 26 Mar 2007, Con Kolivas wrote:
So before we go any further with this patch, can you try the following
one and see if this simple sanity check is enough?
Sure (compiling the kernel now), too bad old ax
On Monday 26 March 2007 01:19, malc wrote:
> On Mon, 26 Mar 2007, Con Kolivas wrote:
> > So before we go any further with this patch, can you try the following
> > one and see if this simple sanity check is enough?
>
> Sure (compiling the kernel now), too bad old axiom that testing can not
> confir
On Mon, 26 Mar 2007, Con Kolivas wrote:
On Monday 26 March 2007 00:57, malc wrote:
On Mon, 26 Mar 2007, Con Kolivas wrote:
On Sunday 25 March 2007 23:06, malc wrote:
On Sun, 25 Mar 2007, Con Kolivas wrote:
On Sunday 25 March 2007 21:46, Con Kolivas wrote:
On Sunday 25 March 2007 21:34, malc
On Monday 26 March 2007 00:57, malc wrote:
> On Mon, 26 Mar 2007, Con Kolivas wrote:
> > On Sunday 25 March 2007 23:06, malc wrote:
> >> On Sun, 25 Mar 2007, Con Kolivas wrote:
> >>> On Sunday 25 March 2007 21:46, Con Kolivas wrote:
> On Sunday 25 March 2007 21:34, malc wrote:
> > On Sun,
On Mon, 26 Mar 2007, Con Kolivas wrote:
On Sunday 25 March 2007 23:06, malc wrote:
On Sun, 25 Mar 2007, Con Kolivas wrote:
On Sunday 25 March 2007 21:46, Con Kolivas wrote:
On Sunday 25 March 2007 21:34, malc wrote:
On Sun, 25 Mar 2007, Ingo Molnar wrote:
* Con Kolivas <[EMAIL PROTECTED]> w
On Sunday 25 March 2007 23:06, malc wrote:
> On Sun, 25 Mar 2007, Con Kolivas wrote:
> > On Sunday 25 March 2007 21:46, Con Kolivas wrote:
> >> On Sunday 25 March 2007 21:34, malc wrote:
> >>> On Sun, 25 Mar 2007, Ingo Molnar wrote:
> * Con Kolivas <[EMAIL PROTECTED]> wrote:
> > For an rsd
On Sunday 25 March 2007, Con Kolivas wrote:
>On Sunday 25 March 2007 22:32, Gene Heskett wrote:
>> On Sunday 25 March 2007, Con Kolivas wrote:
>> >On Sunday 25 March 2007 21:46, Con Kolivas wrote:
>> >> On Sunday 25 March 2007 21:34, malc wrote:
>> >> > On Sun, 25 Mar 2007, Ingo Molnar wrote:
>> >>
On Sun, 25 Mar 2007, Con Kolivas wrote:
On Sunday 25 March 2007 21:46, Con Kolivas wrote:
On Sunday 25 March 2007 21:34, malc wrote:
On Sun, 25 Mar 2007, Ingo Molnar wrote:
* Con Kolivas <[EMAIL PROTECTED]> wrote:
For an rsdl 0.33 patched kernel. Comments? Overhead worth it?
[..snip..]
On Sun, 25 Mar 2007, Con Kolivas wrote:
On Sunday 25 March 2007 21:46, Con Kolivas wrote:
On Sunday 25 March 2007 21:34, malc wrote:
On Sun, 25 Mar 2007, Ingo Molnar wrote:
* Con Kolivas <[EMAIL PROTECTED]> wrote:
For an rsdl 0.33 patched kernel. Comments? Overhead worth it?
we want to do this - and we should do this to the vanilla scheduler
first and check the results.
On Sunday 25 March 2007 22:32, Gene Heskett wrote:
> On Sunday 25 March 2007, Con Kolivas wrote:
> >On Sunday 25 March 2007 21:46, Con Kolivas wrote:
> >> On Sunday 25 March 2007 21:34, malc wrote:
> >> > On Sun, 25 Mar 2007, Ingo Molnar wrote:
> >> > > * Con Kolivas <[EMAIL PROTECTED]> wrote:
> >>
On Sunday 25 March 2007, Con Kolivas wrote:
>On Sunday 25 March 2007 21:46, Con Kolivas wrote:
>> On Sunday 25 March 2007 21:34, malc wrote:
>> > On Sun, 25 Mar 2007, Ingo Molnar wrote:
>> > > * Con Kolivas <[EMAIL PROTECTED]> wrote:
>> > >> For an rsdl 0.33 patched kernel. Comments? Overhead worth it?
On Sunday 25 March 2007 21:46, Con Kolivas wrote:
> On Sunday 25 March 2007 21:34, malc wrote:
> > On Sun, 25 Mar 2007, Ingo Molnar wrote:
> > > * Con Kolivas <[EMAIL PROTECTED]> wrote:
> > >> For an rsdl 0.33 patched kernel. Comments? Overhead worth it?
> > >
> > > we want to do this - and we should do this to the vanilla scheduler
> > > first and check the results.
On Sunday 25 March 2007 21:34, malc wrote:
> On Sun, 25 Mar 2007, Ingo Molnar wrote:
> > * Con Kolivas <[EMAIL PROTECTED]> wrote:
> >> For an rsdl 0.33 patched kernel. Comments? Overhead worth it?
> >
> > we want to do this - and we should do this to the vanilla scheduler
> > first and check the results.
On Sun, 25 Mar 2007, Ingo Molnar wrote:
* Con Kolivas <[EMAIL PROTECTED]> wrote:
For an rsdl 0.33 patched kernel. Comments? Overhead worth it?
we want to do this - and we should do this to the vanilla scheduler
first and check the results. I've back-merged the patch to before RSDL
and have
* Con Kolivas <[EMAIL PROTECTED]> wrote:
> > +/*
> > + * Some helpers for converting nanosecond timing to jiffy resolution
> > + */
> > +#define NS_TO_JIFFIES(TIME) ((TIME) / (1000000000 / HZ))
> > +#define JIFFIES_TO_NS(TIME) ((TIME) * (1000000000 / HZ))
> > +
>
> This hunk is already in ma
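For reference, the helpers in that hunk only convert between nanoseconds and the HZ-based jiffy unit; in userspace terms (HZ=1000 picked arbitrarily here, in the kernel it is a config option) they behave like:

/* toy illustration of the conversion helpers quoted above */
#include <stdio.h>

#define HZ 1000
#define NS_TO_JIFFIES(TIME) ((TIME) / (1000000000ULL / HZ))
#define JIFFIES_TO_NS(TIME) ((TIME) * (1000000000ULL / HZ))

int main(void)
{
    unsigned long long ns = 4500000;     /* 4.5 ms */

    printf("%llu ns = %llu jiffies\n", ns, NS_TO_JIFFIES(ns));  /* 4 */
    printf("7 jiffies = %llu ns\n", JIFFIES_TO_NS(7ULL));       /* 7000000 */
    return 0;
}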
On Sunday 25 March 2007 17:51, Ingo Molnar wrote:
> * Con Kolivas <[EMAIL PROTECTED]> wrote:
> > For an rsdl 0.33 patched kernel. Comments? Overhead worth it?
>
> we want to do this - and we should do this to the vanilla scheduler
> first and check the results. I've back-merged the patch to before