* Jarek Poplawski <[EMAIL PROTECTED]> wrote:
> > [...] The timeslices of tasks (i.e. the time they spend on a CPU
> > without scheduling away) is _not_ maintained directly in CFS as a
> > per-task variable that can be "cleared", it's not the metric that
> > drives scheduling. Yes, of course CF
Ingo Molnar wrote:
* Jarek Poplawski <[EMAIL PROTECTED]> wrote:
[...] (Btw, in -rc8-mm2 I see a new sched_slice() function which seems
to return... time.)
wrong again. That is a function, not a variable to be cleared.
It still gives us a target time, so could we not simply have sched_y
On Mon, 2007-10-01 at 09:49 -0700, David Schwartz wrote:
> > * Jarek Poplawski <[EMAIL PROTECTED]> wrote:
> >
> > > BTW, it looks risky to criticise sched_yield too much: some
> > > people could misinterpret such discussions and stop using it
> > > altogether, even where it's right.
>
> > Really,
On Wed, Oct 03, 2007 at 12:55:34PM +0200, Dmitry Adamushko wrote:
...
> just a quick patch, not tested and I've not evaluated all possible
> implications yet.
> But someone might give it a try with his/(her -- are even more
> welcomed :-) favourite sched_yield() load.
Of course, after some evaluat
David Schwartz wrote:
* Jarek Poplawski <[EMAIL PROTECTED]> wrote:
BTW, it looks risky to criticise sched_yield too much: some
people could misinterpret such discussions and stop using it
altogether, even where it's right.
Really, i have never seen a _single_ mainstream app w
* Dmitry Adamushko <[EMAIL PROTECTED]> wrote:
> + se->vruntime += delta_exec_weighted;
thanks Dmitry.
Btw., this is quite similar to the yield_granularity patch i did
originally, just less flexible. It turned out that apps want either zero
granularity or "infinite" granu
On Wed, Oct 03, 2007 at 12:58:26PM +0200, Dmitry Adamushko wrote:
> On 03/10/2007, Dmitry Adamushko <[EMAIL PROTECTED]> wrote:
> > On 03/10/2007, Jarek Poplawski <[EMAIL PROTECTED]> wrote:
> > > I can't see anything about clearing. I think this was about charging,
> > > which should change the key
On 03/10/2007, Dmitry Adamushko <[EMAIL PROTECTED]> wrote:
> On 03/10/2007, Jarek Poplawski <[EMAIL PROTECTED]> wrote:
> > I can't see anything about clearing. I think this was about charging,
> > which should change the key enough to move a task to, maybe, a better
> > place in a queue (tree) than
On 03/10/2007, Jarek Poplawski <[EMAIL PROTECTED]> wrote:
> I can't see anything about clearing. I think this was about charging,
> which should change the key enough to move a task to, maybe, a better
> place in a queue (tree) than with current ways.
just a quick patch, not tested and I've not ev
On Wed, Oct 03, 2007 at 11:10:58AM +0200, Ingo Molnar wrote:
>
> * Jarek Poplawski <[EMAIL PROTECTED]> wrote:
>
> > On Wed, Oct 03, 2007 at 10:16:13AM +0200, Ingo Molnar wrote:
> > >
> > > * Jarek Poplawski <[EMAIL PROTECTED]> wrote:
> > >
> > > > > firstly, there's no notion of "timeslices" in
* Jarek Poplawski <[EMAIL PROTECTED]> wrote:
> On Wed, Oct 03, 2007 at 10:16:13AM +0200, Ingo Molnar wrote:
> >
> > * Jarek Poplawski <[EMAIL PROTECTED]> wrote:
> >
> > > > firstly, there's no notion of "timeslices" in CFS. (in CFS tasks
> > > > "earn" a right to the CPU, and that "right" is n
On Wed, Oct 03, 2007 at 10:16:13AM +0200, Ingo Molnar wrote:
>
> * Jarek Poplawski <[EMAIL PROTECTED]> wrote:
>
> > > firstly, there's no notion of "timeslices" in CFS. (in CFS tasks
> > > "earn" a right to the CPU, and that "right" is not sliced in the
> > > traditional sense) But we tried a c
* Jarek Poplawski <[EMAIL PROTECTED]> wrote:
> > firstly, there's no notion of "timeslices" in CFS. (in CFS tasks
> > "earn" a right to the CPU, and that "right" is not sliced in the
> > traditional sense) But we tried a conceptually similar thing [...]
>
> From kernel/sched_fair.c:
>
> "/*
On 02-10-2007 08:06, Ingo Molnar wrote:
> * David Schwartz <[EMAIL PROTECTED]> wrote:
...
>> I'm not familiar enough with CFS' internals to help much on the
>> implementation, but there may be some simple compromise yield that
>> might work well enough. How about simply acting as if the task used
On 02-10-2007 17:37, David Schwartz wrote:
...
> So now I not only have to come up with an example where sched_yield is the
> best practical choice, I have to come up with one where sched_yield is the
> best conceivable choice? Didn't we start out by agreeing these are very rare
> cases? Why are we
This is a combined response to Arjan's:
> that's also what trylock is for... as well as spinaphores...
> (you can argue that futexes should be more intelligent and do
> spinaphore stuff etc... and I can buy that, lets improve them in the
> kernel by any means. But userspace yield() isn't the answ
On Tue, Oct 02, 2007 at 11:03:46AM +0200, Jarek Poplawski wrote:
...
> should suffice. Currently, I wonder if simply charging (with a key
> recalculated) such a task for all the time it could've used isn't one
> such method. It seems functionally analogous to going to
> the end of the queue o
On Mon, Oct 01, 2007 at 10:43:56AM +0200, Jarek Poplawski wrote:
...
> etc., if we know (after testing) eg. average expedition time of such
No new theory - it's only my reverse Polish translation. Should be:
"etc., if we know (after testing) eg. average dispatch time of such".
Sorry,
Jarek P.
On Mon, Oct 01, 2007 at 06:25:07PM +0200, Ingo Molnar wrote:
>
> * Jarek Poplawski <[EMAIL PROTECTED]> wrote:
>
> > BTW, it looks risky to criticise sched_yield too much: some
> > people could misinterpret such discussions and stop using it
> > altogether, even where it's right.
>
> Really, i
Ingo Molnar <[EMAIL PROTECTED]> writes:
> * David Schwartz <[EMAIL PROTECTED]> wrote:
>
> > > These are generic statements, but i'm _really_ interested in the
> > > specifics. Real, specific code that i can look at. The typical Linux
> > > distro consists of in excess of 500 million lines
* David Schwartz <[EMAIL PROTECTED]> wrote:
> > at a quick glance this seems broken too - but if you show the
> > specific code i might be able to point out the breakage in detail.
> > (One underlying problem here appears to be fairness: a quick
> > unlock/lock sequence may starve out other th
* David Schwartz <[EMAIL PROTECTED]> wrote:
> > (user-space spinlocks are broken beyond words for anything but
> > perhaps SCHED_FIFO tasks.)
>
> User-space spinlocks are broken so spinlocks can only be implemented
> in kernel-space? Even if you use the kernel to schedule/unschedule the
> tas
* David Schwartz <[EMAIL PROTECTED]> wrote:
> > These are generic statements, but i'm _really_ interested in the
> > specifics. Real, specific code that i can look at. The typical Linux
> > distro consists of in excess of 500 million lines of code, in
> > tens of thousands of apps, so the
On Mon, 1 Oct 2007 15:44:09 -0700
"David Schwartz" <[EMAIL PROTECTED]> wrote:
>
> > yielding IS blocking. Just with indeterminate fuzzyness added to
> > it
>
> Yielding is sort of blocking, but the difference is that yielding
> will not idle the CPU while blocking might.
not really; SOMEON
> yielding IS blocking. Just with indeterminate fuzzyness added to it
Yielding is sort of blocking, but the difference is that yielding will not
idle the CPU while blocking might. Yielding is sometimes preferable to
blocking in a case where the thread knows it can make forward progress even
i
On Mon, 1 Oct 2007 15:17:52 -0700
"David Schwartz" <[EMAIL PROTECTED]> wrote:
>
> Arjan van de Ven wrote:
>
> > > It can occasionally be an optimization. You may have a case where
> > > you can do something very efficiently if a lock is not held, but
> > > you cannot afford to wait for the lock
Arjan van de Ven wrote:
> > It can occasionally be an optimization. You may have a case where you
> > can do something very efficiently if a lock is not held, but you
> > cannot afford to wait for the lock to be released. So you check the
> > lock, if it's held, you yield and then check again. If
Ingo Molnar wrote:
>
> Really, i have never seen a _single_ mainstream app where the use of
> sched_yield() was the right choice.
Pliant 'FastSem' semaphore implementation (as opposed to 'Sem') uses 'yield'
http://old.fullpliant.org/
Basically, if the resource you are protecting with the semaphor
On Mon, 1 Oct 2007 09:49:35 -0700
"David Schwartz" <[EMAIL PROTECTED]> wrote:
>
> > * Jarek Poplawski <[EMAIL PROTECTED]> wrote:
> >
> > > BTW, it looks risky to criticise sched_yield too much: some
> > > people could misinterpret such discussions and stop using it
> > > altogether, even where i
> These are generic statements, but i'm _really_ interested in the
> specifics. Real, specific code that i can look at. The typical Linux
> distro consists of in excess of 500 million lines of code, in tens
> of thousands of apps, so there really must be some good, valid and
> "right" use of
Ingo Molnar wrote:
* Chris Friesen <[EMAIL PROTECTED]> wrote:
However, there are closed-source and/or frozen-source apps where it's
not practical to rewrite or rebuild the app. Does it make sense to
break the behaviour of all of these?
See the background and answers to that in:
http:/
* David Schwartz <[EMAIL PROTECTED]> wrote:
> > > BTW, it looks risky to criticise sched_yield too much: some
> > > people could misinterpret such discussions and stop using it
> > > altogether, even where it's right.
>
> > Really, i have never seen a _single_ mainstream app where the use of
* Chris Friesen <[EMAIL PROTECTED]> wrote:
> Ingo Molnar wrote:
>
> >But, because you assert that it's risky to "criticise sched_yield()
> >too much", you surely must know at least one real example where it's right
> >to use it (and cite the line and code where it's used, with
> >specificity
Ingo Molnar wrote:
But, because you assert that it's risky to "criticise sched_yield()
too much", you surely must know at least one real example where it's right
to use it (and cite the line and code where it's used, with
specificity)?
It's fine to criticise sched_yield(). I agree that new
> * Jarek Poplawski <[EMAIL PROTECTED]> wrote:
>
> > BTW, it looks risky to criticise sched_yield too much: some
> > people could misinterpret such discussions and stop using it
> > altogether, even where it's right.
> Really, i have never seen a _single_ mainstream app where the use of
> sched_
* Jarek Poplawski <[EMAIL PROTECTED]> wrote:
> BTW, it looks risky to criticise sched_yield too much: some
> people could misinterpret such discussions and stop using it
> altogether, even where it's right.
Really, i have never seen a _single_ mainstream app where the use of
sched_yield() wa
On Fri, Sep 28, 2007 at 04:10:00PM +1000, Nick Piggin wrote:
> On Friday 28 September 2007 00:42, Jarek Poplawski wrote:
> > On Thu, Sep 27, 2007 at 03:31:23PM +0200, Ingo Molnar wrote:
> > > * Jarek Poplawski <[EMAIL PROTECTED]> wrote:
> >
> > ...
> >
> > > > OK, but let's forget about fixing iper
On Friday 28 September 2007 00:42, Jarek Poplawski wrote:
> On Thu, Sep 27, 2007 at 03:31:23PM +0200, Ingo Molnar wrote:
> > * Jarek Poplawski <[EMAIL PROTECTED]> wrote:
>
> ...
>
> > > OK, but let's forget about fixing iperf. Probably I got this wrong,
> > > but I've thought this "bad" iperf patch
On Thu, Sep 27, 2007 at 03:31:23PM +0200, Ingo Molnar wrote:
>
> * Jarek Poplawski <[EMAIL PROTECTED]> wrote:
...
> > OK, but let's forget about fixing iperf. Probably I got this wrong,
> > but I've thought this "bad" iperf patch was tested on a few nixes and
> > linux was the most different one
* Jarek Poplawski <[EMAIL PROTECTED]> wrote:
> On Thu, Sep 27, 2007 at 11:46:03AM +0200, Ingo Molnar wrote:
[...]
> > What you missed is that there is no such thing as "predictable yield
> > behavior" for anything but SCHED_FIFO/RR tasks (for which tasks CFS does
> > keep the behavior). Please
On Thu, Sep 27, 2007 at 11:46:03AM +0200, Ingo Molnar wrote:
>
> * Jarek Poplawski <[EMAIL PROTECTED]> wrote:
>
> > > the (small) patch below fixes the iperf locking bug and removes the
> > > yield() use. There are numerous immediate benefits of this patch:
> > ...
> > >
> > > sched_yield() is
* Ingo Molnar <[EMAIL PROTECTED]> [2007-09-27 12:56]:
> i'm curious by how much does CPU go down, and what's the output of
> iperf? (does it saturate full 100mbit network bandwidth)
I get about 94-95 Mbits/sec and CPU drops from 99% to about 82% (this
is with a 600 MHz ARM CPU).
--
Martin Michlma
* Martin Michlmayr <[EMAIL PROTECTED]> wrote:
> * Ingo Molnar <[EMAIL PROTECTED]> [2007-09-27 11:49]:
> > Martin, could you check the iperf patch below instead of the yield
> > patch - does it solve the iperf performance problem equally well,
> > and does CPU utilization drop for you too?
>
> Ye
* Ingo Molnar <[EMAIL PROTECTED]> [2007-09-27 11:49]:
> Martin, could you check the iperf patch below instead of the yield
> patch - does it solve the iperf performance problem equally well,
> and does CPU utilization drop for you too?
Yes, it works and CPU goes down too.
--
Martin Michlmayr
http
* Martin Michlmayr <[EMAIL PROTECTED]> wrote:
> > I think the real fix would be for iperf to use blocking network IO
> > though, or maybe to use a POSIX mutex or POSIX semaphores.
>
> So it's definitely not a bug in the kernel, only in iperf?
>
> (CCing Stephen Hemminger who wrote the iperf pa
* Jarek Poplawski <[EMAIL PROTECTED]> wrote:
> > the (small) patch below fixes the iperf locking bug and removes the
> > yield() use. There are numerous immediate benefits of this patch:
> ...
> >
> > sched_yield() is almost always the symptom of broken locking or other
> > bug. In that sense
On 26-09-2007 15:31, Ingo Molnar wrote:
> * David Schwartz <[EMAIL PROTECTED]> wrote:
>
>>>> I think the real fix would be for iperf to use blocking network IO
>>>> though, or maybe to use a POSIX mutex or POSIX semaphores.
>>> So it's definitely not a bug in the kernel, only in iperf?
>> Martin:
Here is the combined fixes from iperf-users list.
Begin forwarded message:
Date: Thu, 30 Aug 2007 15:55:22 -0400
From: "Andrew Gallatin" <[EMAIL PROTECTED]>
To: [EMAIL PROTECTED]
Subject: [PATCH] performance fixes for non-linux
Hi,
I've attached a patch which gives iperf similar performance to
On Wed, 26 Sep 2007 15:31:38 +0200
Ingo Molnar <[EMAIL PROTECTED]> wrote:
>
> * David Schwartz <[EMAIL PROTECTED]> wrote:
>
> > > > I think the real fix would be for iperf to use blocking network
> > > > IO though, or maybe to use a POSIX mutex or POSIX semaphores.
> > >
> > > So it's definitely
* David Schwartz <[EMAIL PROTECTED]> wrote:
> > > I think the real fix would be for iperf to use blocking network IO
> > > though, or maybe to use a POSIX mutex or POSIX semaphores.
> >
> > So it's definitely not a bug in the kernel, only in iperf?
>
> Martin:
>
> Actually, in this case I thin
> > I think the real fix would be for iperf to use blocking network IO
> > though, or maybe to use a POSIX mutex or POSIX semaphores.
>
> So it's definitely not a bug in the kernel, only in iperf?
Martin:
Actually, in this case I think iperf is doing the right thing (though not
the best thing) a
* Ingo Molnar <[EMAIL PROTECTED]> [2007-09-26 13:21]:
> > > I noticed on the iperf website a patch which contains sched_yield().
> > > http://dast.nlanr.net/Projects/Iperf2.0/patch-iperf-linux-2.6.21.txt
>
> great! Could you try this too:
>    echo 1 > /proc/sys/kernel/sched_compat_yield
>
> does
* Martin Michlmayr <[EMAIL PROTECTED]> wrote:
> * Mike Galbraith <[EMAIL PROTECTED]> [2007-09-26 12:23]:
> > I noticed on the iperf website a patch which contains sched_yield().
> > http://dast.nlanr.net/Projects/Iperf2.0/patch-iperf-linux-2.6.21.txt
> >
> > Do you have that patch applied by any
* Mike Galbraith <[EMAIL PROTECTED]> [2007-09-26 12:23]:
> I noticed on the iperf website a patch which contains sched_yield().
> http://dast.nlanr.net/Projects/Iperf2.0/patch-iperf-linux-2.6.21.txt
>
> Do you have that patch applied by any chance? If so, it might be a
> worth while to try it wit
On Wed, 2007-09-26 at 10:52 +0200, Martin Michlmayr wrote:
> I noticed that my network performance has gone down from 2.6.22
> from [ 3] 0.0-10.0 sec 113 MBytes 95.0 Mbits/sec
> to [ 3] 0.0-10.0 sec 75.7 MBytes 63.3 Mbits/sec
> with 2.6.23-rc1 (and 2.6.23-rc8), as measured with ip
On Wed, 2007-09-26 at 10:52 +0200, Martin Michlmayr wrote:
> I noticed that my network performance has gone down from 2.6.22
> from [ 3] 0.0-10.0 sec 113 MBytes 95.0 Mbits/sec
> to [ 3] 0.0-10.0 sec 75.7 MBytes 63.3 Mbits/sec
> with 2.6.23-rc1 (and 2.6.23-rc8), as measured with ip
* Martin Michlmayr <[EMAIL PROTECTED]> wrote:
> * Ingo Molnar <[EMAIL PROTECTED]> [2007-09-26 11:47]:
> > > this will gather a good deal of info about the workload in question.
> > > Please send me the resulting debug file.
> > Another thing: please also do the same with the vanilla v2.6.22 kern
* Ingo Molnar <[EMAIL PROTECTED]> [2007-09-26 11:47]:
> > this will gather a good deal of info about the workload in question.
> > Please send me the resulting debug file.
> Another thing: please also do the same with the vanilla v2.6.22 kernel,
> and send me that file too. (so that the two cases
* Ingo Molnar <[EMAIL PROTECTED]> wrote:
> > What kind of information can I supply so you can track this down?
>
> as a starter, could you boot the sched-devel.git kernel, with
> CONFIG_SCHED_DEBUG=y and CONFIG_SCHEDSTATS=y enabled and could you run
> this script while the iperf test is in the
* Martin Michlmayr <[EMAIL PROTECTED]> wrote:
> I noticed that my network performance has gone down from 2.6.22
> from [ 3] 0.0-10.0 sec 113 MBytes 95.0 Mbits/sec
> to [ 3] 0.0-10.0 sec 75.7 MBytes 63.3 Mbits/sec
> with 2.6.23-rc1 (and 2.6.23-rc8), as measured with iperf.
>
> I