Hi,
> 2.4.x has changed the scheduler behaviour so that the task that calls
> sched_yield() is not rescheduled by the incoming schedule(). A flag is
> set ( under certain conditions in SMP ) and the goodness() calculation
> assigns a lower value to the exiting task ( this flag is cleared in
On 11-Mar-2001 Dave Zarzycki wrote:
> On Mon, 12 Mar 2001, Anton Blanchard wrote:
>
>> Perhaps we need something like sched_yield that takes off some of
>> tsk->counter so the task with the spinlock will run earlier.
>
> Personally speaking, I wish the sched_yield() API were like so:
>
>     int sched_yield(pid_t pid);
>
> The pid argument would be
On 11-Mar-2001 Anton Blanchard wrote:
>
>> This is the linux thread spinlock acquire :
>>
>> static void __pthread_acquire(int * spinlock)
>> {
>>   int cnt = 0;
>>   struct timespec tm;
>>
>>   while (testandset(spinlock)) {
>>     if (cnt < MAX_SPIN_COUNT) {
>>       sched_yield();
>>       cnt++;
>>     } else {
>>       tm.tv_sec = 0;
>>       tm.tv_nsec = SPIN_SLEEP_DURATION;
>>       nanosleep(&tm, NULL);
>>       cnt = 0;
>>     }
>>   }
>> }
On 10-Mar-2001 Andi Kleen wrote:
> Davide Libenzi <[EMAIL PROTECTED]> writes:
>
>> Probably the rate at which sys_sched_yield() is called is not high enough
>> for the performance improvement to be measurable.
>
> LinuxThreads mutexes call sched_yield() when a lock is locked, so when you
> have a multithreaded program with some lock
On 10-Mar-2001 Mike Kravetz wrote:
> Any thoughts about adding a 'fast path' to the SMP code in
> sys_sched_yield? Why not compare nr_pending to smp_num_cpus
> before examining the aligned_data structures? Something like,
>
>     if (nr_pending > smp_num_cpus)
>         goto set_resched_now;
>
> Where set_resched_now is a label placed just before