Re: [patch 1/4] - Potential performance bottleneck for Linxu TCP

2006-12-01 Thread David Miller
From: Evgeniy Polyakov <[EMAIL PROTECTED]>
Date: Fri, 1 Dec 2006 12:53:07 +0300
> Isn't it a step in direction of full tcp processing bound to process
> context? :)
:-) Rather, it is just finer grained locking.

Re: [patch 1/4] - Potential performance bottleneck for Linxu TCP

2006-12-01 Thread Evgeniy Polyakov
On Thu, Nov 30, 2006 at 12:14:43PM -0800, David Miller ([EMAIL PROTECTED]) wrote:
> > It steals timeslices from other processes to complete tcp_recvmsg()
> > task, and only when it does it for too long, it will be preempted.
> > Processing backlog queue on behalf of need_resched() will break
> > f

Re: [patch 1/4] - Potential performance bottleneck for Linxu TCP

2006-11-30 Thread David Miller
From: Ingo Molnar <[EMAIL PROTECTED]>
Date: Thu, 30 Nov 2006 21:49:08 +0100
> So i dont support the scheme proposed here, the blatant bending of the
> priority scale towards the TCP workload.
I don't support this scheme either ;-) That's why my proposal is to find a way to allow input packet pr

RE: [patch 1/4] - Potential performance bottleneck for Linxu TCP

2006-11-30 Thread Wenji Wu
>if you still have the test-setup, could you nevertheless try setting the
>priority of the receiving TCP task to nice -20 and see what kind of
>performance you get?
A process with a nice value of -20 can easily get interactive status. When it expires, it still goes back to the active array. It just h

Re: [patch 1/4] - Potential performance bottleneck for Linxu TCP

2006-11-30 Thread Ingo Molnar
* Ingo Molnar <[EMAIL PROTECTED]> wrote:
> [...] Instead what i'd like to see is more TCP performance (and a
> nicer over-the-wire behavior - no retransmits for example) /with the
> same 10% CPU time used/. Are we in rough agreement?
put in another way: i'd like to see the "TCP bytes transferr

Re: [patch 1/4] - Potential performance bottleneck for Linxu TCP

2006-11-30 Thread Ingo Molnar
* David Miller <[EMAIL PROTECTED]> wrote:
> > disk I/O is typically not CPU bound, and i believe these TCP tests
> > /are/ CPU-bound. Otherwise there would be no expiry of the timeslice
> > to begin with and the TCP receiver task would always be boosted to
> > 'interactive' status by the sched

RE: [patch 1/4] - Potential performance bottleneck for Linxu TCP

2006-11-30 Thread Wenji Wu
> It steals timeslices from other processes to complete tcp_recvmsg()
> task, and only when it does it for too long, it will be preempted.
> Processing backlog queue on behalf of need_resched() will break
> fairness too - processing itself can take a lot of time, so process
> can be scheduled away

Re: [patch 1/4] - Potential performance bottleneck for Linxu TCP

2006-11-30 Thread David Miller
From: Ingo Molnar <[EMAIL PROTECTED]>
Date: Thu, 30 Nov 2006 21:30:26 +0100
> disk I/O is typically not CPU bound, and i believe these TCP tests /are/
> CPU-bound. Otherwise there would be no expiry of the timeslice to begin
> with and the TCP receiver task would always be boosted to 'interactiv

Re: [patch 1/4] - Potential performance bottleneck for Linxu TCP

2006-11-30 Thread Ingo Molnar
* David Miller <[EMAIL PROTECTED]> wrote:
> I want to point out something which is slightly misleading about this
> kind of analysis.
>
> Your disk I/O speed doesn't go down by a factor of 10 just because 9
> other non disk I/O tasks are running, yet for TCP that's seemingly OK
> :-)
disk I/O

Re: [patch 1/4] - Potential performance bottleneck for Linxu TCP

2006-11-30 Thread Ingo Molnar
* Wenji Wu <[EMAIL PROTECTED]> wrote:
> >The solution is really simple and needs no kernel change at all: if
> >you want the TCP receiver to get a larger share of timeslices then
> >either renice it to -20 or renice the other tasks to +19.
>
> Simply give a larger share of timeslices to the TC

Re: [patch 1/4] - Potential performance bottleneck for Linxu TCP

2006-11-30 Thread David Miller
From: Evgeniy Polyakov <[EMAIL PROTECTED]>
Date: Thu, 30 Nov 2006 13:22:06 +0300
> It steals timeslices from other processes to complete tcp_recvmsg()
> task, and only when it does it for too long, it will be preempted.
> Processing backlog queue on behalf of need_resched() will break
> fairness t

Re: [patch 1/4] - Potential performance bottleneck for Linxu TCP

2006-11-30 Thread David Miller
From: Ingo Molnar <[EMAIL PROTECTED]>
Date: Thu, 30 Nov 2006 11:32:40 +0100
> Note that even without the change the TCP receiving task is already
> getting a disproportionate share of cycles due to softirq processing!
> Under a load of 10.0 it went from 500 mbits to 74 mbits, while the
> 'fair'

Re: [patch 1/4] - Potential performance bottleneck for Linxu TCP

2006-11-30 Thread David Miller
From: Wenji Wu <[EMAIL PROTECTED]>
Date: Thu, 30 Nov 2006 10:08:22 -0600
> If the higher priority processes become runnable (e.g., interactive
> process), you better yield the CPU, instead of continuing this process. If
> it is the case that the process within tcp_recvmsg() is expiring, then, you

RE: [patch 1/4] - Potential performance bottleneck for Linxu TCP

2006-11-30 Thread Wenji Wu
>The solution is really simple and needs no kernel change at all: if you
>want the TCP receiver to get a larger share of timeslices then either
>renice it to -20 or renice the other tasks to +19.
Simply giving a larger share of timeslices to the TCP receiver won't solve the problem. No matter what
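
For readers following along: the renicing Ingo refers to is "renice -n -20 -p <pid>" run as root, or an equivalent setpriority() call. A minimal userspace sketch (the target is left as the calling process purely for illustration):

    /* Equivalent of "renice -n -20" applied to the current process.
     * Lowering the nice value below 0 needs root / CAP_SYS_NICE.
     * Sketch only; error handling kept minimal. */
    #include <stdio.h>
    #include <sys/resource.h>
    #include <sys/time.h>

    int main(void)
    {
            if (setpriority(PRIO_PROCESS, 0 /* current process */, -20) != 0) {
                    perror("setpriority");
                    return 1;
            }
            printf("nice is now %d\n", getpriority(PRIO_PROCESS, 0));
            return 0;
    }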

Re: [patch 1/4] - Potential performance bottleneck for Linxu TCP

2006-11-30 Thread Lee Revell
On Thu, 2006-11-30 at 09:33 +, Christoph Hellwig wrote:
> On Wed, Nov 29, 2006 at 07:56:58PM -0600, Wenji Wu wrote:
> > Yes, when CONFIG_PREEMPT is disabled, the "problem" won't happen. That is
> > why I put "for 2.6 desktop, low-latency desktop" in the uploaded paper.
> > This "problem" happ

RE: [patch 1/4] - Potential performance bottleneck for Linxu TCP

2006-11-30 Thread Wenji Wu
>We can make explicit preemption checks in the main loop of
>tcp_recvmsg(), and release the socket and run the backlog if
>need_resched() is TRUE.
>This is the simplest and most elegant solution to this problem.
I am not sure whether this approach will work. How can you make the explicit pree
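
For context, the check David is describing would sit inside the tcp_recvmsg() receive loop and look roughly like the sketch below. This is not code from the thread, just the general 2.6 socket-locking pattern: release_sock() is what runs the packets queued on sk->sk_backlog, so dropping and re-taking the lock at a need_resched() point both drains the backlog and lets the scheduler run something else.

    /* Sketch of an explicit preemption point in the tcp_recvmsg() loop
     * (assumes the usual struct sock *sk held with lock_sock()). */
    if (need_resched()) {
            release_sock(sk);   /* processes the socket backlog queue */
            cond_resched();     /* yield the CPU if a reschedule is pending */
            lock_sock(sk);      /* re-acquire before continuing the copy */
    }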

Re: [patch 1/4] - Potential performance bottleneck for Linxu TCP

2006-11-30 Thread Ingo Molnar
* Evgeniy Polyakov <[EMAIL PROTECTED]> wrote:
> > David's line of thinking for a solution sounds better to me. This
> > patch does not prevent the process from being preempted (for
> > potentially a long time), by any means.
>
> It steals timeslices from other processes to complete tcp_recvmsg

Re: [patch 1/4] - Potential performance bottleneck for Linxu TCP

2006-11-30 Thread Evgeniy Polyakov
On Thu, Nov 30, 2006 at 09:07:42PM +1100, Nick Piggin ([EMAIL PROTECTED]) wrote:
> >Doesn't the provided solution is just a in-kernel variant of the
> >SCHED_FIFO set from userspace? Why kernel should be able to mark some
> >users as having higher priority?
> >What if workload of the system is targ

Re: [patch 1/4] - Potential performance bottleneck for Linxu TCP

2006-11-30 Thread Nick Piggin
Evgeniy Polyakov wrote:
On Thu, Nov 30, 2006 at 08:35:04AM +0100, Ingo Molnar ([EMAIL PROTECTED]) wrote:
Doesn't the provided solution is just a in-kernel variant of the SCHED_FIFO set from userspace? Why kernel should be able to mark some users as having higher priority? What if workload of t

Re: [patch 1/4] - Potential performance bottleneck for Linxu TCP

2006-11-30 Thread Evgeniy Polyakov
On Thu, Nov 30, 2006 at 08:35:04AM +0100, Ingo Molnar ([EMAIL PROTECTED]) wrote:
> what was observed here were the effects of completely throttling TCP
> processing for a given socket. I think such throttling can in fact be
> desirable: there is a /reason/ why the process context was preempted: i
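
Background for this exchange: while a task sits inside tcp_recvmsg() holding the socket lock, the softirq receive path cannot do TCP processing for that socket and only queues the packet on the backlog, roughly as below (simplified from the 2.6-era tcp_v4_rcv(), error paths omitted). The backlog is drained only when the owner calls release_sock(), which is why preempting the owner for a long time throttles the whole connection.

    /* Simplified softirq-side logic when a packet arrives for a socket
     * currently owned by a user-context task. */
    bh_lock_sock(sk);
    if (!sock_owned_by_user(sk))
            tcp_v4_do_rcv(sk, skb);      /* normal protocol processing */
    else
            sk_add_backlog(sk, skb);     /* deferred until release_sock() */
    bh_unlock_sock(sk);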

Re: [patch 1/4] - Potential performance bottleneck for Linxu TCP

2006-11-30 Thread Christoph Hellwig
On Wed, Nov 29, 2006 at 07:56:58PM -0600, Wenji Wu wrote:
> Yes, when CONFIG_PREEMPT is disabled, the "problem" won't happen. That is why
> I put "for 2.6 desktop, low-latency desktop" in the uploaded paper. This
> "problem" happens in the 2.6 Desktop and Low-latency Desktop.
CONFIG_PREEMPT is o

Re: [patch 1/4] - Potential performance bottleneck for Linxu TCP

2006-11-29 Thread Ingo Molnar
* David Miller <[EMAIL PROTECTED]> wrote:
> > furthermore, the tweak allows the shifting of processing from a
> > prioritized process context into a highest-priority softirq context.
> > (it's not proven that there is any significant /net win/ of
> > performance: all that was proven is that if

Re: [patch 1/4] - Potential performance bottleneck for Linxu TCP

2006-11-29 Thread David Miller
From: Ingo Molnar <[EMAIL PROTECTED]>
Date: Thu, 30 Nov 2006 07:47:58 +0100
> furthermore, the tweak allows the shifting of processing from a
> prioritized process context into a highest-priority softirq context.
> (it's not proven that there is any significant /net win/ of performance:
> all t

Re: [patch 1/4] - Potential performance bottleneck for Linxu TCP

2006-11-29 Thread Ingo Molnar
* David Miller <[EMAIL PROTECTED]> wrote:
> This is why my suggestion is to preempt_disable() as soon as we grab
> the socket lock, [...]
independently of the issue at hand, in general the explicit use of preempt_disable() in non-infrastructure code is quite a heavy tool. Its effects are heav
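
For reference, the suggestion Ingo is pushing back on ("preempt_disable() as soon as we grab the socket lock") amounts to something like the sketch below, not a proposed patch; the preemption-off window would span the whole receive path, which is exactly the latency concern raised here.

    /* Sketch of the "no preemption while the socket lock is held" idea. */
    lock_sock(sk);
    preempt_disable();

    /* ... tcp_recvmsg() copy loop runs here without being preempted ... */

    preempt_enable();
    release_sock(sk);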

Re: [patch 1/4] - Potential performance bottleneck for Linxu TCP

2006-11-29 Thread Ingo Molnar
* David Miller <[EMAIL PROTECTED]> wrote:
> > yeah, i like this one. If the problem is "too long locked section",
> > then the most natural solution is to "break up the lock", not to
> > "boost the priority of the lock-holding task" (which is what the
> > proposed patch does).
>
> Ingo you're

Re: [patch 1/4] - Potential performance bottleneck for Linxu TCP

2006-11-29 Thread David Miller
From: Ingo Molnar <[EMAIL PROTECTED]>
Date: Thu, 30 Nov 2006 07:17:58 +0100
>
> * David Miller <[EMAIL PROTECTED]> wrote:
>
> > We can make explicit preemption checks in the main loop of
> > tcp_recvmsg(), and release the socket and run the backlog if
> > need_resched() is TRUE.
> >
> > This

Re: [patch 1/4] - Potential performance bottleneck for Linxu TCP

2006-11-29 Thread Ingo Molnar
* Wenji Wu <[EMAIL PROTECTED]> wrote:
> > That yield() will need to be removed - yield()'s behaviour is truly
> > awful if the system is otherwise busy. What is it there for?
>
> Please read the uploaded paper, which has detailed description.
do you have any URL for that?

	Ingo

Re: [patch 1/4] - Potential performance bottleneck for Linxu TCP

2006-11-29 Thread Ingo Molnar
* David Miller <[EMAIL PROTECTED]> wrote:
> We can make explicit preemption checks in the main loop of
> tcp_recvmsg(), and release the socket and run the backlog if
> need_resched() is TRUE.
>
> This is the simplest and most elegant solution to this problem.
yeah, i like this one. If the pr

Re: [patch 1/4] - Potential performance bottleneck for Linxu TCP

2006-11-29 Thread Mike Galbraith
On Wed, 2006-11-29 at 17:08 -0800, Andrew Morton wrote:
> +	if (p->backlog_flag == 0) {
> +		if (!TASK_INTERACTIVE(p) || expired_starving(rq)) {
> +			enqueue_task(p, rq->expired);
> +			if (p->static_prio < rq->best
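
The quoted hunk is cut off by the archive. It appears to wrap the stock 2.6 scheduler_tick() expiry logic, so the complete change presumably reads roughly as follows; this is a reconstruction from the visible fragment plus the mainline code of that era, not the patch as submitted:

    /* Reconstruction: only move the task to the expired array when it has
     * no TCP backlog pending (backlog_flag == 0); otherwise keep it on the
     * active array so it can get back into tcp_recvmsg() sooner. */
    if (p->backlog_flag == 0) {
            if (!TASK_INTERACTIVE(p) || expired_starving(rq)) {
                    enqueue_task(p, rq->expired);
                    if (p->static_prio < rq->best_expired_prio)
                            rq->best_expired_prio = p->static_prio;
            } else
                    enqueue_task(p, rq->active);
    } else
            enqueue_task(p, rq->active);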

Re: [patch 1/4] - Potential performance bottleneck for Linxu TCP

2006-11-29 Thread David Miller
From: Wenji Wu <[EMAIL PROTECTED]>
Date: Wed, 29 Nov 2006 19:56:58 -0600
> >We could also pepper tcp_recvmsg() with some very carefully placed
> >preemption disable/enable calls to deal with this even with
> >CONFIG_PREEMPT enabled.
>
> I also think about this approach. But since the "problem" hap

Re: [patch 1/4] - Potential performance bottleneck for Linxu TCP

2006-11-29 Thread Wenji Wu
From: Andrew Morton <[EMAIL PROTECTED]>
Date: Wednesday, November 29, 2006 7:08 pm
Subject: Re: [patch 1/4] - Potential performance bottleneck for Linxu TCP
> On Wed, 29 Nov 2006 16:53:11 -0800 (PST)
> David Miller <[EMAIL PROTECTED]> wrote:
>
> >
> > Please, it is very difficult to review your work th

Re: [patch 1/4] - Potential performance bottleneck for Linxu TCP

2006-11-29 Thread Wenji Wu
From: David Miller <[EMAIL PROTECTED]>
Date: Wednesday, November 29, 2006 7:13 pm
Subject: Re: [patch 1/4] - Potential performance bottleneck for Linxu TCP
> From: Andrew Morton <[EMAIL PROTECTED]>
> Date: Wed, 29 Nov 2006 17:08:35 -0800
>
> > On Wed, 29 Nov 2006 16:53:11 -0800

Re: [patch 1/4] - Potential performance bottleneck for Linxu TCP

2006-11-29 Thread David Miller
From: Andrew Morton <[EMAIL PROTECTED]>
Date: Wed, 29 Nov 2006 17:08:35 -0800
> On Wed, 29 Nov 2006 16:53:11 -0800 (PST)
> David Miller <[EMAIL PROTECTED]> wrote:
>
> >
> > Please, it is very difficult to review your work the way you have
> > submitted this patch as a set of 4 patches. These pa

Re: [patch 1/4] - Potential performance bottleneck for Linxu TCP

2006-11-29 Thread Andrew Morton
On Wed, 29 Nov 2006 16:53:11 -0800 (PST)
David Miller <[EMAIL PROTECTED]> wrote:
>
> Please, it is very difficult to review your work the way you have
> submitted this patch as a set of 4 patches. These patches have not
> been split up "logically", but rather they have been split up "per
> file"

Re: [patch 1/4] - Potential performance bottleneck for Linxu TCP

2006-11-29 Thread David Miller
Please, it is very difficult to review your work the way you have submitted this patch as a set of 4 patches. These patches have not been split up "logically", but rather they have been split up "per file" with the same exact changelog message in each patch posting. This is very clumsy, and impos