Denis Vlasenko wrote:
> This is what I would expect if run on an otherwise idle machine.
> sched_yield just puts you at the back of the line for runnable
> processes, it doesn't magically cause you to go to sleep somehow.

When a kernel build is occurring??? Plus `top` itself. It damn
well
On Tuesday 23 August 2005 14:17, linux-os (Dick Johnson) wrote:
>
> On Mon, 22 Aug 2005, Robert Hancock wrote:
>
> > linux-os (Dick Johnson) wrote:
> >> I reported that sched_yield() wasn't working (at least as expected)
> >> back in March of 2004.
> >>
> >>     for(;;)
> >>         sched_yield();
> >>
> >> ... takes 100% CPU time as reported by `top`. It should take
> >> practically 0. Somebody said that this was because
Florian Weimer wrote:
* Howard Chu:
> That's not the complete story. BerkeleyDB provides a
> db_env_set_func_yield() hook to tell it what yield function it
> should use when its internal locking routines need such a function.
> If you don't set a specific hook, it just uses sleep(). The OpenLDAP
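The hook Howard mentions takes a plain yield callback; a sleep-based replacement of the kind one might register could look like this (a sketch only, the exact db_env_set_func_yield() signature varies across BerkeleyDB releases, and `msleep_yield` is an illustrative name):

```c
#include <errno.h>
#include <time.h>

/* A candidate yield callback for BerkeleyDB's lock backoff.  Unlike
 * sched_yield(), which merely rotates the runqueue, a short sleep
 * blocks the caller so that even a lower-priority lock holder can run.
 * Returns 0 on success, matching the hook's convention. */
int msleep_yield(void)
{
    struct timespec ts = { 0, 1000000L };      /* 1 ms */
    while (nanosleep(&ts, &ts) == -1 && errno == EINTR)
        ;                                      /* restart after a signal */
    return 0;
}
```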
Florian Weimer wrote:
* Andi Kleen:
Has anybody contacted the Sleepycat people with a description of the
problem yet?
Berkeley DB does not call sched_yield, but OpenLDAP does in some
wrapper code around the Berkeley DB backend.
That's not the complete story. BerkeleyDB provides a
Nikita Danilov wrote:
Howard Chu writes:
> That's beside the point. Folks are making an assertion that
> sched_yield() is meaningless; this example demonstrates that there are
> cases where sched_yield() is essential.
It is not essential, it is non-portable.
Code you described is based on
On Sat, 20 Aug 2005, Robert Hancock wrote:
> Howard Chu wrote:
>> I'll note that we removed a number of the yield calls (that were in
>> OpenLDAP 2.2) for the 2.3 release, because I found that they were
>> redundant and causing unnecessary delays. My own test system is running
>> on a Linux
* Howard Chu:
>>> Has anybody contacted the Sleepycat people with a description of
>>> the problem yet?
>> Berkeley DB does not call sched_yield, but OpenLDAP does in some
>> wrapper code around the Berkeley DB backend.
> That's not the complete story. BerkeleyDB provides a
>
> processes (PTHREAD_SCOPE_SYSTEM). The previous comment about slapd only
> needing to yield within a single process is inaccurate; since we allow
> slapcat to run concurrently with slapd (to allow hot backups) we need
> BerkeleyDB's locking/yield functions to work in System scope.
That's broken
Howard Chu writes:
> Lee Revell wrote:
> > On Sat, 2005-08-20 at 11:38 -0700, Howard Chu wrote:
> > > But I also found that I needed to add a new yield(), to work around
> > > yet another unexpected issue on this system - we have a number of
> > > threads waiting on a condition variable, and
Howard Chu wrote:
I'll note that we removed a number of the yield calls (that were in
OpenLDAP 2.2) for the 2.3 release, because I found that they were
redundant and causing unnecessary delays. My own test system is running
on a Linux 2.6.12.3 kernel (installed over a SuSE 9.2 x86_64 distro),
Howard Chu wrote:
Lee Revell wrote:
On Sat, 2005-08-20 at 11:38 -0700, Howard Chu wrote:
> But I also found that I needed to add a new yield(), to work around
> yet another unexpected issue on this system - we have a number of
> threads waiting on a condition variable, and the thread holding
Howard Chu writes:
> Nikita Danilov wrote:
> > That returns us to the core of the problem: sched_yield() is used to
> > implement a synchronization primitive and non-portable assumptions are
> > made about its behavior: SUS defines that after sched_yield() thread
> > ceases to run on the CPU
On Sat, 2005-08-20 at 11:38 -0700, Howard Chu wrote:
> Nick Piggin wrote:
> > Robert Hancock wrote:
> > > I fail to see how sched_yield is going to be very helpful in this
> > > situation. Since that call can sleep from a range of time ranging
> > > from zero to a long time, it's going to give
Lee Revell wrote:
On Sat, 2005-08-20 at 11:38 -0700, Howard Chu wrote:
> But I also found that I needed to add a new yield(), to work around
> yet another unexpected issue on this system - we have a number of
> threads waiting on a condition variable, and the thread holding the
> mutex signals
On Sat, 2005-08-20 at 11:38 -0700, Howard Chu wrote:
> But I also found that I needed to add a new
> yield(), to work around yet another unexpected issue on this system -
> we have a number of threads waiting on a condition variable, and the
> thread holding the mutex signals the var, unlocks the
Nikita Danilov wrote:
That returns us to the core of the problem: sched_yield() is used to
implement a synchronization primitive and non-portable assumptions are
made about its behavior: SUS defines that after sched_yield() thread
ceases to run on the CPU "until it again becomes the head of its
Nick Piggin wrote:
Robert Hancock wrote:
> I fail to see how sched_yield is going to be very helpful in this
> situation. Since that call can sleep from a range of time ranging
> from zero to a long time, it's going to give unpredictable results.
Well, not sleep technically, but yield the
Howard Chu <[EMAIL PROTECTED]> writes:
> In this specific example, we use whatever
> BerkeleyDB provides and we're certainly not about to write our own
> transactional embedded database engine just for this.
BerkeleyDB is free software after all that comes with source code.
Surely it can be
Howard Chu writes:
> Nikita Danilov wrote:
[...]
>
> > What prevents transaction monitor from using, say, condition
> > variables to "yield cpu"? That would have an additional advantage of
> > blocking thread precisely until specific event occurs, instead of
> > blocking for some
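The alternative Nikita suggests, blocking on a condition variable until the specific event occurs rather than yielding in a loop, can be sketched minimally (names here are illustrative, not from OpenLDAP or BerkeleyDB):

```c
#include <pthread.h>
#include <stdbool.h>

pthread_mutex_t txn_lock = PTHREAD_MUTEX_INITIALIZER;
pthread_cond_t  txn_done = PTHREAD_COND_INITIALIZER;
bool txn_aborted = false;

/* Block until the deadlock victim has aborted.  The thread sleeps in
 * the kernel and wakes precisely when signalled, instead of burning
 * CPU in a sched_yield() loop hoping the victim gets scheduled. */
void wait_for_abort(void)
{
    pthread_mutex_lock(&txn_lock);
    while (!txn_aborted)                      /* re-check: spurious wakeups */
        pthread_cond_wait(&txn_done, &txn_lock);
    pthread_mutex_unlock(&txn_lock);
}

void announce_abort(void)
{
    pthread_mutex_lock(&txn_lock);
    txn_aborted = true;
    pthread_cond_broadcast(&txn_done);        /* wake every waiter at once */
    pthread_mutex_unlock(&txn_lock);
}
```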
Robert Hancock wrote:
I fail to see how sched_yield is going to be very helpful in this
situation. Since that call can sleep from a range of time ranging from
zero to a long time, it's going to give unpredictable results.
Well, not sleep technically, but yield the CPU for some undefined
Howard Chu wrote:
You assume that spinlocks are the only reason a developer may want to
yield the processor. This assumption is unfounded. Case in point - the
primary backend in OpenLDAP uses a transactional database with
page-level locking of its data structures to provide high levels of
Nikita Danilov wrote:
Howard Chu <[EMAIL PROTECTED]> writes:
> concurrency. It is the nature of such a system to encounter
> deadlocks over the normal course of operations. When a deadlock is
> detected, some thread must be chosen (by one of a variety of
> algorithms) to abort its transaction,
Chris Wedgwood wrote:
On Thu, Aug 18, 2005 at 11:03:45PM -0700, Howard Chu wrote:
> If the 2.6 kernel makes this programming model unreasonably slow,
> then quite simply this kernel is not viable as a database platform.
Pretty much everyone else manages to make it work.
And this contributes
Howard Chu <[EMAIL PROTECTED]> writes:
[...]
> concurrency. It is the nature of such a system to encounter deadlocks
> over the normal course of operations. When a deadlock is detected, some
> thread must be chosen (by one of a variety of algorithms) to abort its
> transaction, in order to allow
On Thu, Aug 18, 2005 at 11:03:45PM -0700, Howard Chu wrote:
> If the 2.6 kernel makes this programming model unreasonably slow,
> then quite simply this kernel is not viable as a database platform.
Pretty much everyone else manages to make it work.
Hi Howard,
Thanks for joining the discussion. One request, if I may,
can you retain the CC list on posts please?
Howard Chu wrote:
>
AFAIKS, sched_yield should only really be used by realtime
applications that know exactly what they're doing.
pthread_yield() was deleted from the POSIX
Andi Kleen wrote:
> Bernardo Innocenti <[EMAIL PROTECTED]> writes:
>
> It's really more a feature than a bug that it breaks so easily
> because they should be really using futexes instead, which
> have much better behaviour than any sched_yield ever could
> (they will directly wake up another
Bernardo Innocenti <[EMAIL PROTECTED]> writes:
It's really more a feature than a bug that it breaks so easily
because they should be really using futexes instead, which
have much better behaviour than any sched_yield ever could
(they will directly wake up another process waiting for the
lock and
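Andi's futex point, that a waiter can be woken directly instead of spinning on sched_yield(), can be sketched with the raw syscall (Linux-specific; glibc exposes no futex() wrapper, and `futex_demo` is an illustrative name):

```c
#include <linux/futex.h>
#include <sys/syscall.h>
#include <unistd.h>
#include <pthread.h>
#include <stdint.h>

uint32_t avail = 0;                /* 0 = held, 1 = released */

static long futex(uint32_t *uaddr, int op, uint32_t val)
{
    /* No glibc wrapper exists, so invoke the syscall directly. */
    return syscall(SYS_futex, uaddr, op, val, NULL, NULL, 0);
}

static void *waiter(void *arg)
{
    /* Sleep in the kernel while avail is still 0; if the value has
     * already changed, FUTEX_WAIT returns immediately with EAGAIN and
     * the loop re-checks, so there is no lost-wakeup window. */
    while (__atomic_load_n(&avail, __ATOMIC_ACQUIRE) == 0)
        futex(&avail, FUTEX_WAIT, 0);
    return NULL;
}

int futex_demo(void)
{
    pthread_t t;
    if (pthread_create(&t, NULL, waiter, NULL) != 0)
        return -1;
    __atomic_store_n(&avail, 1, __ATOMIC_RELEASE);
    futex(&avail, FUTEX_WAKE, 1);  /* wake exactly one waiter */
    return pthread_join(t, NULL);  /* 0 on success */
}
```

This is the mechanism Andi alludes to: the wake goes straight to the thread waiting on the lock word, with no scheduler guessing involved.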
Nick Piggin wrote:
> We class the SCHED_OTHER policy as having a single priority, which
> I believe is allowed (and even makes good sense, because dynamic
> and even nice priorities aren't really well defined).
>
> That also makes our sched_yield() behaviour correct.
>
> AFAIKS, sched_yield
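For the realtime classes Nick refers to, sched_yield() does have crisp semantics: the caller moves to the tail of the run queue for its static priority, so the next SCHED_FIFO thread at that priority runs. A sketch (switching to SCHED_FIFO requires root or CAP_SYS_NICE, and `make_fifo`/`fifo_range_ok` are illustrative names):

```c
#include <sched.h>

/* Put the calling process into SCHED_FIFO at the given static priority.
 * Only then is the yield-to-equal-priority-peer behaviour well defined. */
int make_fifo(int prio)
{
    struct sched_param sp = { .sched_priority = prio };
    return sched_setscheduler(0, SCHED_FIFO, &sp);  /* 0 = this process */
}

/* Sanity-check the realtime priority range; Linux uses 1..99. */
int fifo_range_ok(void)
{
    int lo = sched_get_priority_min(SCHED_FIFO);
    int hi = sched_get_priority_max(SCHED_FIFO);
    return lo >= 1 && hi >= lo;
}
```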
Hello Con,
Thursday, August 18, 2005, 2:47:25 AM, you wrote:
> sched_yield behaviour changed in 2.5 series more than 3 years ago and
> applications that use this as a locking primitive should be updated.
I remember OpenOffice had a problem with excessive use of sched_yield()
during 2.5. I guess
Joseph Fannin wrote:
On Thu, Aug 18, 2005 at 02:50:16AM +0200, Bernardo Innocenti wrote:
The relative timestamp reveals that slapd is spending 50ms
after yielding. Meanwhile, GCC is probably being scheduled
for a whole quantum.
Reading the man-page of sched_yield() it seems this isn't
the
Joseph Fannin wrote:
>The behavior of sched_yield changed for 2.6. I suppose the man
> page didn't get updated.
Now I remember reading about that on LWN or maybe KernelTraffic.
Thanks!
>>I also think OpenLDAP is wrong. First, it should be calling
>>pthread_yield() because slapd is a
On Thu, Aug 18, 2005 at 02:50:16AM +0200, Bernardo Innocenti wrote:
> The relative timestamp reveals that slapd is spending 50ms
> after yielding. Meanwhile, GCC is probably being scheduled
> for a whole quantum.
>
> Reading the man-page of sched_yield() it seems this isn't
> the correct
On Thu, 18 Aug 2005 10:50 am, Bernardo Innocenti wrote:
> Hello,
>
> I've been investigating a performance problem on a
> server using OpenLDAP 2.2.26 for nss resolution and
> running kernel 2.6.12.
>
> When a CPU bound process such as GCC is running in the
> background (even at nice 10), many
Hello,
I've been investigating a performance problem on a
server using OpenLDAP 2.2.26 for nss resolution and
running kernel 2.6.12.
When a CPU bound process such as GCC is running in the
background (even at nice 10), many trivial commands such
as "su" or "groups" become extremely slow and take