On Mon, Mar 12, 2007 at 01:56:13PM -0700, David Miller wrote:
From: Pekka J Enberg [EMAIL PROTECTED]
Date: Mon, 12 Mar 2007 14:15:16 +0200 (EET)
On 3/9/07, David Miller [EMAIL PROTECTED] wrote:
The whole cache-multipath subsystem has to have its guts revamped for
proper error
On Sun, Nov 05, 2006 at 12:53:23AM +0100, Christoph Hellwig wrote:
On Sat, Nov 04, 2006 at 06:06:48PM -0500, Dave Jones wrote:
On Sat, Nov 04, 2006 at 11:56:29PM +0100, Christoph Hellwig wrote:
This will break the compile for !NUMA if someone ends up doing a bisect
and lands here as a
On Sat, May 20, 2006 at 01:51:45PM -0700, Chris Wedgwood wrote:
...
Anyhow, the code as-is hasn't been maintained for a long time except
for a few minor blips (I'm using hg's annotate to find those and have
included those people on the cc list as presumably they are using
these features and
The following patch adds node-aware, device round-robin IP multipathing.
It is based on multipath_drr.c, the multipath device round-robin algorithm, and
is derived from it. This implementation maintains a per-node state table, and
round-robins between interfaces on the same node. The
This patch checks device locality on every IP packet xmit.
In a multipath configuration, the TCP connection-to-route association is made at
session startup time. The TCP session's process may be migrated to a different
node after this association. That would mean a remote NIC is chosen for xmit,
although a
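The per-node round-robin selection described above can be sketched in userspace C. This is a minimal illustrative model, not the patch's actual code: the structure names, sizes, and the fallback convention are all assumptions.

```c
#include <assert.h>

#define MAX_NODES 4
#define MAX_DEVS  8

/* Hypothetical model of the per-node state table the patch description
 * mentions: each NUMA node keeps its local interfaces plus a rotating
 * cursor, so xmit can round-robin among node-local NICs only. */
struct node_rr_state {
	int devs[MAX_DEVS];	/* interface ids local to this node */
	int ndevs;
	int next;		/* round-robin cursor */
};

static struct node_rr_state node_state[MAX_NODES];

/* Pick the next interface local to the given node, advancing the cursor. */
static int rr_select_dev(int node)
{
	struct node_rr_state *s = &node_state[node];
	int dev;

	if (s->ndevs == 0)
		return -1;	/* no local NIC: caller must fall back */
	dev = s->devs[s->next];
	s->next = (s->next + 1) % s->ndevs;
	return dev;
}

/* Small demo: node 0 has NICs 10 and 11; the third pick wraps around. */
static int demo_third_pick(void)
{
	node_state[0].devs[0] = 10;
	node_state[0].devs[1] = 11;
	node_state[0].ndevs = 2;
	node_state[0].next = 0;
	rr_select_dev(0);		/* 10 */
	rr_select_dev(0);		/* 11 */
	return rr_select_dev(0);	/* wraps back to 10 */
}
```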
On Wed, Mar 08, 2006 at 04:32:58PM -0800, Andrew Morton wrote:
Ravikiran G Thirumalai [EMAIL PROTECTED] wrote:
On Wed, Mar 08, 2006 at 03:43:21PM -0800, Andrew Morton wrote:
Benjamin LaHaise [EMAIL PROTECTED] wrote:
I think it may make more sense to simply convert local_t
On Thu, Mar 09, 2006 at 07:14:26PM +1100, Nick Piggin wrote:
Ravikiran G Thirumalai wrote:
Here's a patch making x86_64 local_t 64 bits wide like the other 64-bit arches.
This keeps local_t unsigned long. (We can change it to a signed value
along with the other arches later in one go, I guess
On Tue, Mar 07, 2006 at 06:16:02PM -0800, Andrew Morton wrote:
Ravikiran G Thirumalai [EMAIL PROTECTED] wrote:
+static inline int read_sockets_allocated(struct proto *prot)
+{
+ int total = 0;
+ int cpu;
+ for_each_cpu(cpu)
+ total += *per_cpu_ptr(prot
On Tue, Mar 07, 2006 at 07:22:34PM -0800, Andrew Morton wrote:
Ravikiran G Thirumalai [EMAIL PROTECTED] wrote:
The problem is percpu_counter_sum has to read all the CPUs' cachelines. If
we have to use percpu_counter_sum everywhere, then we might as well use plain
per-cpu counters instead
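The trade-off under discussion can be modeled in a few lines of userspace C. This is an illustrative sketch of the batching idea, not the kernel's percpu_counter implementation; names and the batch size are assumptions.

```c
#include <assert.h>

#define NCPUS 4
#define BATCH 32

/* Model of a batching per-cpu counter: each "cpu" accumulates a local
 * delta and folds it into the shared total only when it crosses the
 * batch, so reading the total touches one cacheline instead of one
 * cacheline per cpu. */
struct batch_counter {
	long total;		/* approximate shared value, O(1) to read */
	long local[NCPUS];	/* per-cpu deltas, bounded by BATCH */
};

static void batch_mod(struct batch_counter *c, int cpu, long amount)
{
	c->local[cpu] += amount;
	if (c->local[cpu] >= BATCH || c->local[cpu] <= -BATCH) {
		c->total += c->local[cpu];	/* fold the batch */
		c->local[cpu] = 0;
	}
}

/* Cheap approximate read: one cacheline. */
static long batch_read(struct batch_counter *c)
{
	return c->total;
}

/* Exact read: walks every cpu's slot, which is the cost of
 * percpu_counter_sum that the thread is discussing. */
static long batch_sum(struct batch_counter *c)
{
	long t = c->total;
	int cpu;

	for (cpu = 0; cpu < NCPUS; cpu++)
		t += c->local[cpu];
	return t;
}

static int demo(void)
{
	struct batch_counter c = {0};

	batch_mod(&c, 0, 10);
	if (batch_read(&c) != 0)	/* delta still held locally */
		return 0;
	if (batch_sum(&c) != 10)	/* exact sum sees it */
		return 0;
	batch_mod(&c, 0, 30);		/* 40 >= BATCH: folded into total */
	return batch_read(&c) == 40;
}
```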
On Wed, Mar 08, 2006 at 04:17:33PM -0500, Benjamin LaHaise wrote:
On Wed, Mar 08, 2006 at 01:07:26PM -0800, Ravikiran G Thirumalai wrote:
Last time I checked, all the major architectures had efficient local_t
implementations. Most of the RISC CPUs are able to do a load / store
conditional
Add percpu_counter_mod_bh for using these counters safely from
both softirq and process context.
Signed-off-by: Pravin B. Shelar [EMAIL PROTECTED]
Signed-off-by: Ravikiran G Thirumalai [EMAIL PROTECTED]
Signed-off-by: Shai Fultheim [EMAIL PROTECTED]
Index: linux-2.6.16-rc5mm3/include/linux
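The reason for a _bh variant can be shown with a small userspace model. In the kernel, the helper brackets the non-atomic per-cpu update with local_bh_disable()/local_bh_enable() so a softirq on the same CPU cannot interleave with a process-context update; the stubbed nesting counter below is only a stand-in for that mechanism.

```c
#include <assert.h>

/* Stand-in for the kernel's softirq-disable nesting on this cpu. */
static int bh_depth;

static void model_local_bh_disable(void) { bh_depth++; }
static void model_local_bh_enable(void)  { bh_depth--; }

struct counter { long count; };

static void counter_mod(struct counter *c, long amount)
{
	c->count += amount;	/* not atomic: needs bh protection */
}

/* Model of percpu_counter_mod_bh: same update, but with softirqs
 * locked out on this cpu for the duration of the read-modify-write. */
static void counter_mod_bh(struct counter *c, long amount)
{
	model_local_bh_disable();
	counter_mod(c, amount);
	model_local_bh_enable();
}

static long demo(void)
{
	struct counter c = { 0 };

	counter_mod_bh(&c, 5);
	counter_mod_bh(&c, -2);
	return bh_depth == 0 ? c.count : -1;	/* nesting balanced */
}
```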
Change struct proto->memory_allocated to a batching per-CPU counter
(percpu_counter) from an atomic_t. A batching counter is better than a
plain per-CPU counter as this field is read often.
Signed-off-by: Pravin B. Shelar [EMAIL PROTECTED]
Signed-off-by: Ravikiran Thirumalai [EMAIL PROTECTED]
Change the atomic_t sockets_allocated member of struct proto to a
per-cpu counter.
Signed-off-by: Pravin B. Shelar [EMAIL PROTECTED]
Signed-off-by: Ravikiran Thirumalai [EMAIL PROTECTED]
Signed-off-by: Shai Fultheim [EMAIL PROTECTED]
Index: linux-2.6.16-rc5mm3/include/net/sock.h
On Tue, Mar 07, 2006 at 06:14:22PM -0800, Andrew Morton wrote:
Ravikiran G Thirumalai [EMAIL PROTECTED] wrote:
- if (atomic_read(sk->sk_prot->memory_allocated) <
  sk->sk_prot->sysctl_mem[0]) {
+ if (percpu_counter_read(sk->sk_prot->memory_allocated) <
+ sk->sk_prot
On Fri, Jan 27, 2006 at 03:01:06PM -0800, Andrew Morton wrote:
Ravikiran G Thirumalai [EMAIL PROTECTED] wrote:
If the benchmarks say that we need to. If we cannot observe any problems
in testing of existing code and if we can't demonstrate any benefit from
the patched code
On Fri, Jan 27, 2006 at 09:53:53AM +0100, Eric Dumazet wrote:
Ravikiran G Thirumalai wrote:
Change the atomic_t sockets_allocated member of struct proto to a
per-cpu counter.
Signed-off-by: Pravin B. Shelar [EMAIL PROTECTED]
Signed-off-by: Ravikiran Thirumalai [EMAIL PROTECTED]
Signed
On Fri, Jan 27, 2006 at 12:16:02PM -0800, Andrew Morton wrote:
Ravikiran G Thirumalai [EMAIL PROTECTED] wrote:
which can be assumed to be infrequent.
At sk_stream_mem_schedule(), read_sockets_allocated() is invoked only under
certain conditions, under memory pressure -- on a large CPU count
On Fri, Jan 27, 2006 at 11:30:23PM +0100, Eric Dumazet wrote:
There are several issues here :
alloc_percpu()'s current implementation is a waste of RAM (because it uses
slab allocations that have a minimum object size of 32 bytes)
Oh there was a solution for that :).
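The waste being pointed out is easy to quantify. A sketch of the arithmetic, using illustrative numbers (a 4-byte counter and a 64-CPU box are assumptions, as is the helper name):

```c
#include <assert.h>

/* Slab allocators of that era had a 32-byte minimum object size, so an
 * alloc_percpu() of a small counter burned most of each per-cpu
 * object. */
#define SLAB_MIN_OBJ 32L

/* Bytes lost across all cpus for one alloc_percpu() of objsize bytes. */
static long percpu_waste(long objsize, long ncpus)
{
	long alloc = objsize < SLAB_MIN_OBJ ? SLAB_MIN_OBJ : objsize;

	return (alloc - objsize) * ncpus;
}
```

For example, a 4-byte counter on a 64-CPU machine wastes (32 - 4) * 64 = 1792 bytes per counter.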
On Fri, Jan 27, 2006 at 03:08:47PM -0800, Andrew Morton wrote:
Andrew Morton [EMAIL PROTECTED] wrote:
Oh, and because vm_acct_memory() is counting a singleton object, it can use
DEFINE_PER_CPU rather than alloc_percpu(), so it saves on a bit of kmalloc
overhead.
Actually, I don't think
On Sat, Jan 28, 2006 at 12:21:07AM +0100, Eric Dumazet wrote:
Ravikiran G Thirumalai wrote:
On Fri, Jan 27, 2006 at 11:30:23PM +0100, Eric Dumazet wrote:
Why not use a boot time allocated percpu area (as done today in
setup_per_cpu_areas()), but instead of reserving extra space
On Sat, Jan 28, 2006 at 01:35:03AM +0100, Eric Dumazet wrote:
Eric Dumazet wrote:
Andrew Morton wrote:
Eric Dumazet [EMAIL PROTECTED] wrote:
#ifdef CONFIG_SMP
void percpu_counter_mod(struct percpu_counter *fbc, long amount)
{
long old, new;
atomic_long_t *pcount;
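The snippet above is cut off by the archive, but the shape of a cmpxchg-based update can be sketched in userspace with C11 atomics standing in for the kernel's atomic_long_cmpxchg(). This is an assumption about the intent of the loop, not Eric's actual code.

```c
#include <assert.h>
#include <stdatomic.h>

/* cmpxchg-style update: read the slot, compute the new value, and
 * retry if another context changed the slot in the meantime. */
static void cmpxchg_counter_mod(atomic_long *pcount, long amount)
{
	long old, new;

	do {
		old = atomic_load(pcount);
		new = old + amount;
	} while (!atomic_compare_exchange_weak(pcount, &old, new));
}

static long demo(void)
{
	atomic_long count = 0;

	cmpxchg_counter_mod(&count, 5);
	cmpxchg_counter_mod(&count, -2);
	return atomic_load(&count);
}
```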
The following patches change struct proto.memory_allocated and
proto.sockets_allocated to use per-cpu counters. This patchset also switches
the proto.inuse percpu variable to use alloc_percpu, instead of NR_CPUS *
cacheline-size padding.
We saw a 5% improvement in apache bench requests per second with
Change the atomic_t sockets_allocated member of struct proto to a
per-cpu counter.
Signed-off-by: Pravin B. Shelar [EMAIL PROTECTED]
Signed-off-by: Ravikiran Thirumalai [EMAIL PROTECTED]
Signed-off-by: Shai Fultheim [EMAIL PROTECTED]
Index: linux-2.6.16-rc1/include/net/sock.h
Change struct proto->memory_allocated to a batching per-CPU counter
(percpu_counter) from an atomic_t. A batching counter is better than a
plain per-CPU counter as this field is read often.
Signed-off-by: Pravin B. Shelar [EMAIL PROTECTED]
Signed-off-by: Ravikiran Thirumalai [EMAIL PROTECTED]
Add percpu_counter_mod_bh for using these counters safely from
both softirq and process context.
Signed-off-by: Pravin B. Shelar [EMAIL PROTECTED]
Signed-off-by: Ravikiran G Thirumalai [EMAIL PROTECTED]
Signed-off-by: Shai Fultheim [EMAIL PROTECTED]
Index: linux-2.6.16-rc1/include/linux