On 01/19/2016 07:08 PM, tip-bot for Raghavendra K T wrote:
Commit-ID: 9c03ee147193645be4c186d3688232fa438c57c7
Gitweb: http://git.kernel.org/tip/9c03ee147193645be4c186d3688232fa438c57c7
Author: Raghavendra K T <raghavendra...@linux.vnet.ibm.com>
AuthorDate: Sat, 16 Jan 2016 00:31:23 +0530
Committer: Ingo Molnar
use local_memory_node(), which is guaranteed to have memory.
local_memory_node() is a no-op on architectures that do not support
memoryless nodes.
Signed-off-by: Raghavendra K T <raghavendra...@linux.vnet.ibm.com>
---
block/blk-mq-cpumap.c | 2 +-
block/blk-mq.c | 2 +-
2 files changed, 2 insertions(+), 2 deletions(-)
Valida
hctx->cpumask is already populated, so let the tag cpumask follow it
instead of going through a new for loop.
Signed-off-by: Raghavendra K T <raghavendra...@linux.vnet.ibm.com>
---
block/blk-mq.c | 9 +
1 file changed, 1 insertion(+), 8 deletions(-)
Nish had suggested to put cpumask_copy after WARN_ON (inst
On 11/24/2015 02:43 AM, Tejun Heo wrote:
Hello,
On Thu, Nov 19, 2015 at 03:54:35PM +0530, Raghavendra K T wrote:
While I was creating thousands of docker containers on a power8 baremetal
system (config: 4.3.0 kernel, 1TB RAM, 20 core = 160 cpu), after creating
around 5600 containers
I have hit
Hi,
While I was creating thousands of docker containers on a power8 baremetal
system (config: 4.3.0 kernel, 1TB RAM, 20 core = 160 cpu), after creating
around 5600 containers
I have hit the below problem.
[This is looking similar to
https://bugzilla.kernel.org/show_bug.cgi?id=101011, but
kernel had
On 10/13/2015 10:17 PM, Jeff Moyer wrote:
Raghavendra K T writes:
In nr_hw_queues > 1 cases, when a certain number of cpus are onlined or
offlined, resulting in a change of the request_queue map in the block-mq
layer, we see the kernel dumping like:
What version is that patch against? This prob
es where the new mapping does not cause a problem.
That is also fixed with this change.
This problem was originally found on powervm, which had 160 cpus (SMT8) and
128 nr_hw_queues. The dump was easily reproduced by offlining the last core,
and it has been a blocker issue because cpu hotplug is a common case for
DLPAR.
On 10/06/2015 03:55 PM, Michael Ellerman wrote:
On Sun, 2015-09-27 at 23:59 +0530, Raghavendra K T wrote:
Problem description:
Powerpc has sparse node numbering, i.e. on a 4 node system nodes are
numbered (possibly) as 0,1,16,17. At a lower level, we map the chipid
got from device tree
On 10/06/2015 03:47 PM, Michael Ellerman wrote:
On Sun, 2015-27-09 at 18:29:09 UTC, Raghavendra K T wrote:
We access numa_cpu_lookup_table array directly in all the places
to read/update numa cpu lookup information. Instead use a helper
function to update.
This is helpful in changing the way
On 09/30/2015 01:16 AM, Denis Kirjanov wrote:
On 9/29/15, Raghavendra K T wrote:
On 09/28/2015 10:34 PM, Nishanth Aravamudan wrote:
On 28.09.2015 [13:44:42 +0300], Denis Kirjanov wrote:
On 9/27/15, Raghavendra K T wrote:
Problem description:
Powerpc has sparse node numbering, i.e. on a 4
On 09/28/2015 11:05 PM, Nishanth Aravamudan wrote:
On 27.09.2015 [23:59:11 +0530], Raghavendra K T wrote:
Once we have made the distinction between nid and chipid
create a 1:1 mapping between them. This makes compacting the
nids easy later.
No functionality change.
Signed-off-by: Raghavendra
On 09/28/2015 11:04 PM, Nishanth Aravamudan wrote:
On 27.09.2015 [23:59:08 +0530], Raghavendra K T wrote:
[...]
2) Map the sparse chipid got from device tree to a serial nid at kernel
level (The idea proposed in this series).
Pro: It is more natural to handle at kernel level than at lower
On 09/28/2015 11:02 PM, Nishanth Aravamudan wrote:
On 27.09.2015 [23:59:12 +0530], Raghavendra K T wrote:
Create arrays that maps serial nids and sparse chipids.
Note: My original idea had only two arrays of chipid to nid map. Final
code is inspired by driver/acpi/numa.c that maps a proximity
On 09/28/2015 10:58 PM, Nishanth Aravamudan wrote:
On 27.09.2015 [23:59:11 +0530], Raghavendra K T wrote:
Once we have made the distinction between nid and chipid
create a 1:1 mapping between them. This makes compacting the
nids easy later.
Didn't the previous patch just do the opposite
On 09/28/2015 10:57 PM, Nishanth Aravamudan wrote:
On 27.09.2015 [23:59:10 +0530], Raghavendra K T wrote:
There is no change in the functionality
Signed-off-by: Raghavendra K T
---
arch/powerpc/mm/numa.c | 42 +-
1 file changed, 21 insertions(+), 21
On 09/28/2015 10:34 PM, Nishanth Aravamudan wrote:
On 28.09.2015 [13:44:42 +0300], Denis Kirjanov wrote:
On 9/27/15, Raghavendra K T wrote:
Problem description:
Powerpc has sparse node numbering, i.e. on a 4 node system nodes are
numbered (possibly) as 0,1,16,17. At a lower level, we map
On 09/27/2015 11:59 PM, Raghavendra K T wrote:
We access numa_cpu_lookup_table array directly in all the places
to read/update numa cpu lookup information. Instead use a helper
function to update.
This is helpful in changing the way the numa<-->cpu mapping is done in a
single place when
Once we have made the distinction between nid and chipid
create a 1:1 mapping between them. This makes compacting the
nids easy later.
No functionality change.
Signed-off-by: Raghavendra K T
---
arch/powerpc/mm/numa.c | 36 +---
1 file changed, 29 insertions
: cleanup patches
patch 4: Adds helper function to map nid and chipid
patch 5: Uses the mapping to get serial nid
Raghavendra K T (5):
powerpc:numa Add numa_cpu_lookup function to update lookup table
powerpc:numa Rename functions referring to nid as chipid
powerpc:numa create 1:1 mappaing
and cpus
2) Running the tests from numactl source.
3) Creating 1000s of docker containers stressing the system
Signed-off-by: Raghavendra K T
---
arch/powerpc/mm/numa.c | 13 -
1 file changed, 8 insertions(+), 5 deletions(-)
diff --git a/arch/powerpc/mm/numa.c b/arch/powerpc/mm
nality.
Signed-off-by: Raghavendra K T
---
arch/powerpc/include/asm/mmzone.h | 2 +-
arch/powerpc/kernel/smp.c | 10 +-
arch/powerpc/mm/numa.c | 28 +---
3 files changed, 23 insertions(+), 17 deletions(-)
diff --git a/arch/powerpc/include/asm/mmzon
id_map nodemask. The mask helps in finding the first unused
nid easily by knowing the first unset bit in the mask.
No change in functionality.
Signed-off-by: Raghavendra K T
---
arch/powerpc/mm/numa.c | 48 +++-
1 file changed, 47 insertions(+), 1 deletion(-)
diff --git a/arch/powerpc/mm/numa.c b
There is no change in the functionality
Signed-off-by: Raghavendra K T
---
arch/powerpc/mm/numa.c | 42 +-
1 file changed, 21 insertions(+), 21 deletions(-)
diff --git a/arch/powerpc/mm/numa.c b/arch/powerpc/mm/numa.c
index d5e6eee..f84ed2f 100644
* Michael Ellerman [2015-09-22 15:29:03]:
> On Tue, 2015-09-15 at 07:38 +0530, Raghavendra K T wrote:
> >
> > ... nothing
>
> Sure this patch looks obvious, but please give me a changelog that proves
> you've thought about it thoroughly.
>
> For example
On 09/22/2015 10:59 AM, Michael Ellerman wrote:
On Tue, 2015-09-15 at 07:38 +0530, Raghavendra K T wrote:
... nothing
Sure this patch looks obvious, but please give me a changelog that proves
you've thought about it thoroughly.
For example is it OK to use for_each_node() at this point
On 09/15/2015 07:38 AM, Raghavendra K T wrote:
The functions used in the patch are in the slowpath, which gets called
whenever alloc_super is called during mounts.
Though this should not make difference for the architectures with
sequential numa node ids, for the powerpc which can potentially have
)
- Add comment that node 0 should always be present (Vladimir)
Raghavendra K T (2):
mm: Replace nr_node_ids for loop with for_each_node in list lru
powerpc:numa Do not allocate bootmem memory for non existing nodes
arch/powerpc/mm/numa.c | 2 +-
mm/list_lru.c | 34
Signed-off-by: Raghavendra K T
---
arch/powerpc/mm/numa.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/powerpc/mm/numa.c b/arch/powerpc/mm/numa.c
index 8b9502a..8d8a541 100644
--- a/arch/powerpc/mm/numa.c
+++ b/arch/powerpc/mm/numa.c
@@ -80,7 +80,7 @@ static void
numa ids, 0,1,16,17
is common), this patch saves some unnecessary allocations for
non-existing numa nodes.
Even without that saving, the patch perhaps makes the code more readable.
[ Take memcg_aware check outside for_each loop: Vladimir ]
Signed-off-by: Raghavendra K T
---
mm/list_lru.c | 34
On 09/14/2015 05:34 PM, Vladimir Davydov wrote:
On Mon, Sep 14, 2015 at 05:09:31PM +0530, Raghavendra K T wrote:
On 09/14/2015 02:30 PM, Vladimir Davydov wrote:
On Wed, Sep 09, 2015 at 12:01:46AM +0530, Raghavendra K T wrote:
The functions used in the patch are in slowpath, which gets called
On 09/14/2015 02:30 PM, Vladimir Davydov wrote:
Hi,
On Wed, Sep 09, 2015 at 12:01:46AM +0530, Raghavendra K T wrote:
The functions used in the patch are in slowpath, which gets called
whenever alloc_super is called during mounts.
Though this should not make difference for the architectures
numa ids, 0,1,16,17
is common), this patch saves some unnecessary allocations for
non-existing numa nodes.
Even without that saving, the patch perhaps makes the code more readable.
Signed-off-by: Raghavendra K T
---
mm/list_lru.c | 23 +++
1 file changed, 15 insertions(+), 8 deletions
with for_each_node so that allocations happen only for
existing numa nodes.
Please note that, though there are many places where nr_node_ids is used,
current patchset uses for_each_node only for slowpath to avoid find_next_bit
traversal.
Raghavendra K T (2):
mm: Replace nr_node_ids for loop with for_each_node
Signed-off-by: Raghavendra K T
---
include/net/ip.h | 10 ++
net/ipv4/af_inet.c | 41 +++--
2 files changed, 37 insertions(+), 14 deletions(-)
diff --git a/include/net/ip.h b/include/net/ip.h
index d5fe9f2..93bf12e 100644
--- a/include/net/ip.h
[kernel.kallsyms] [k] veth_stats_one
changes/ideas suggested:
Using buffer in stack (Eric), Usage of memset (David), Using memcpy in
place of unaligned_put (Joe).
Signed-off-by: Raghavendra K T
---
net/ipv6/addrconf.c | 26 --
1 file changed, 16 insertions(+), 10 deletions
cache-misses: 1.41 %
Please let me know if you have suggestions/comments.
Thanks Eric, Joe and David for the comments.
Raghavendra K T (2):
net: Introduce helper functions to get the per cpu data
net: Optimize snmp stat aggregation by walking all the percpu data at
once
include/net/ip.h | 10 ++
On 08/29/2015 08:51 PM, Joe Perches wrote:
On Sat, 2015-08-29 at 07:32 -0700, Eric Dumazet wrote:
On Sat, 2015-08-29 at 14:37 +0530, Raghavendra K T wrote:
static inline void __snmp6_fill_stats64(u64 *stats, void __percpu *mib,
- int items, int bytes
[kernel.kallsyms] [k] veth_stats_one
changes/ideas suggested:
Using buffer in stack (Eric), Usage of memset (David), Using memcpy in
place of unaligned_put (Joe).
Signed-off-by: Raghavendra K T
---
net/ipv6/addrconf.c | 22 +-
1 file changed, 13 insertions(+), 9 deletions(-)
Changes
docker[.] strings.FieldsFunc
cache-misses: 1.41 %
Please let me know if you have suggestions/comments.
Thanks Eric, Joe and David for comments on V1 and V2.
Raghavendra K T (2):
net: Introduce helper functio
On 08/29/2015 10:41 AM, David Miller wrote:
From: Raghavendra K T
Date: Sat, 29 Aug 2015 08:27:15 +0530
resending the patch with memset. Please let me know if you want to
resend all the patches.
Do not post patches as replies to existing discussion threads.
Instead, make a new, fresh
On 08/29/2015 08:56 AM, Eric Dumazet wrote:
On Sat, 2015-08-29 at 08:27 +0530, Raghavendra K T wrote:
/* Use put_unaligned() because stats may not be aligned for u64. */
put_unaligned(items, &stats[0]);
for (i = 1; i < items; i++)
- put_unalig
* David Miller [2015-08-28 11:24:13]:
> From: Raghavendra K T
> Date: Fri, 28 Aug 2015 12:09:52 +0530
>
> > On 08/28/2015 12:08 AM, David Miller wrote:
> >> From: Raghavendra K T
> >> Date: Wed, 26 Aug 2015 23:07:33 +0530
> >>
> &
On 08/28/2015 12:08 AM, David Miller wrote:
From: Raghavendra K T
Date: Wed, 26 Aug 2015 23:07:33 +0530
@@ -4641,10 +4647,12 @@ static inline void __snmp6_fill_stats64(u64 *stats,
void __percpu *mib,
static void snmp6_fill_stats(u64 *stats, struct inet6_dev *idev, int attrtype
[kernel.kallsyms] [k] _raw_spin_lock
Signed-off-by: Raghavendra K T
---
net/ipv6/addrconf.c | 18 +-
1 file changed, 13 insertions(+), 5 deletions(-)
Change in V2:
- Allocate stat calculation buffer in stack (Eric)
Thanks David and Eric for comments on V1 and as both of them
you have suggestions/comments.
Thanks Eric and David for comments on V1.
Raghavendra K T (2):
net: Introduce helper functions to get the per cpu data
net: Optimize snmp stat aggregation by walking all the percpu data at
once
include/net/ip.h | 10 ++
net/ipv4/af_ine
On 08/26/2015 07:39 PM, Eric Dumazet wrote:
On Wed, 2015-08-26 at 15:55 +0530, Raghavendra K T wrote:
On 08/26/2015 04:37 AM, David Miller wrote:
From: Raghavendra K T
Date: Tue, 25 Aug 2015 13:24:24 +0530
Please let me know if you have suggestions/comments.
Like Eric Dumazet said
On 08/25/2015 09:30 PM, Eric Dumazet wrote:
On Tue, 2015-08-25 at 21:17 +0530, Raghavendra K T wrote:
On 08/25/2015 07:58 PM, Eric Dumazet wrote:
This is a great idea, but kcalloc()/kmalloc() can fail and you'll crash
the whole kernel at this point.
Good catch, and my bad. Though system
On 08/26/2015 04:37 AM, David Miller wrote:
From: Raghavendra K T
Date: Tue, 25 Aug 2015 13:24:24 +0530
Please let me know if you have suggestions/comments.
Like Eric Dumazet said the idea is good but needs some adjustments.
You might want to see whether a per-cpu work buffer works
1 - 100 of 1108 matches