On Thu, Jan 29 2015, Tejun Heo wrote:
> Hello,
>
> Since the cgroup writeback patchset[1] has been posted, several
> people brought up concerns about whether the complexity of allowing an
> inode to be dirtied against multiple cgroups is necessary for the
> purpose of writeback and it is true that a
On Thu, Jan 08 2015, Johannes Weiner wrote:
> Introduce the basic control files to account, partition, and limit
> memory using cgroups in default hierarchy mode.
>
> This interface versioning allows us to address fundamental design
> issues in the existing memory cgroup interface, further explained
Fixes: e61734c55c24 ("cgroup: remove cgroup->name")
Signed-off-by: Greg Thelen
---
mm/memcontrol.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 851924fa5170..683b4782019b 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -1477,9 +1477,9 @@ void
Use BUILD_BUG_ON() to compile assert that memcg string tables are in
sync with corresponding enums. There aren't currently any issues with
these tables. This is just defensive.
Signed-off-by: Greg Thelen
---
mm/memcontrol.c | 4
1 file changed, 4 insertions(+)
diff --git a/mm
On Mon, Nov 17 2014, Greg Thelen wrote:
[...]
> Given that bss and brk are nobits (i.e. only ALLOC) sections, does
> file_offset make sense as a load address? This fails with gold:
>
> $ git checkout v3.18-rc5
> $ make # with gold
> [...]
> ..bss and .brk lack common file offset
On Fri, Oct 31 2014, Junjie Mao wrote:
> When choosing a random address, the current implementation does not take into
> account the reserved space for .bss and .brk sections. Thus the relocated
> kernel may overlap other components in memory. Here is an example of the
> overlap from a
>
On Tue, Sep 23 2014, Johannes Weiner wrote:
> On Mon, Sep 22, 2014 at 10:52:50PM -0700, Greg Thelen wrote:
>>
>> On Fri, Sep 19 2014, Johannes Weiner wrote:
>>
>> > In a memcg with even just moderate cache pressure, success rates for
>> > transparent huge page allocations drop to zero, wasting a lot
On Fri, Sep 19 2014, Johannes Weiner wrote:
> In a memcg with even just moderate cache pressure, success rates for
> transparent huge page allocations drop to zero, wasting a lot of
> effort that the allocator puts into assembling these pages.
>
> The reason for this is that the memcg reclaim code
On Tue, Sep 16 2014, Vladimir Davydov wrote:
> Hi Suleiman,
>
> On Mon, Sep 15, 2014 at 12:13:33PM -0700, Suleiman Souhlal wrote:
>> On Mon, Sep 15, 2014 at 3:44 AM, Vladimir Davydov
>> wrote:
>> > Hi,
>> >
>> > I'd like to discuss downsides of the kmem accounting part of the memory
>> > cgroup
On Thu, Aug 07 2014, Johannes Weiner wrote:
> On Thu, Aug 07, 2014 at 03:08:22PM +0200, Michal Hocko wrote:
>> On Mon 04-08-14 17:14:54, Johannes Weiner wrote:
>> > Instead of passing the request size to direct reclaim, memcg just
>> > manually loops around reclaiming SWAP_CLUSTER_MAX pages until the
tical over aggressive shrinking of dm bufio objects.
If the uninitialized dm_bufio_client.shrinker.flags contains
SHRINKER_NUMA_AWARE then shrink_slab() would call the dm shrinker for
each numa node rather than just once. This has been broken since 3.12.
Signed-off-by: Greg Thelen
---
drivers/md/
On Wed, Jul 9, 2014 at 9:36 AM, Vladimir Davydov wrote:
> Hi Tim,
>
> On Wed, Jul 09, 2014 at 08:08:07AM -0700, Tim Hockin wrote:
>> How is this different from RLIMIT_AS? You specifically mentioned it
>> earlier but you don't explain how this is different.
>
> The main difference is that
6b208e3f6e35 ("mm: memcg: remove unused node/section info from
pc->flags") deleted the lookup_cgroup_page() function but left a
prototype for it.
Kill the vestigial prototype.
Signed-off-by: Greg Thelen
---
include/linux/page_cgroup.h | 1 -
1 file changed, 1 deletion(-)
diff --
On Tue, Jun 10 2014, Johannes Weiner wrote:
> On Mon, Jun 09, 2014 at 03:52:51PM -0700, Greg Thelen wrote:
>>
>> On Fri, Jun 06 2014, Michal Hocko wrote:
>>
>> > Some users (e.g. Google) would like to have stronger semantic than low
>> > limit offers currently
On Fri, Jun 06 2014, Michal Hocko wrote:
> Some users (e.g. Google) would like to have stronger semantic than low
> limit offers currently. The fallback mode is not desirable and they
> prefer hitting OOM killer rather than ignoring low limit for protected
> groups. There are other possible
On Wed, May 28 2014, Johannes Weiner wrote:
> On Wed, May 28, 2014 at 04:21:44PM +0200, Michal Hocko wrote:
>> On Wed 28-05-14 09:49:05, Johannes Weiner wrote:
>> > On Wed, May 28, 2014 at 02:10:23PM +0200, Michal Hocko wrote:
>> > > Hi Andrew, Johannes,
>> > >
>> > > On Mon 28-04-14 14:26:41,
On Tue, May 13 2014, Michal Hocko wrote:
> force_empty has been introduced primarily to drop memory before it gets
> reparented on the group removal. This alone doesn't sound fully
> justified because reparented pages which are not in use can be reclaimed
> also later when there is a memory
>> migration fails. This prevents unnecessary work done by the freeing scanner
>> but also encourages memory to be as compacted as possible at the end of the
>> zone.
>>
>> Reported-by: Greg Thelen
>
> What did Greg actually report? IOW, what if any observable problem is
> being fixed here?
I detected the problem at runtime seeing that ext4 metadata pages (esp
the ones read by sbi->s_group_desc
On Mon, Apr 28 2014, Roman Gushchin wrote:
> 28.04.2014, 16:27, "Michal Hocko" :
>> The series is based on top of the current mmotm tree. Once the series
>> gets accepted I will post a patch which will mark the soft limit as
>> deprecated with a note that it will be eventually dropped. Let me
-by: Vladimir Davydov
One comment nit below, otherwise looks good to me.
Acked-by: Greg Thelen
> Cc: Johannes Weiner
> Cc: Michal Hocko
> Cc: Glauber Costa
> Cc: Christoph Lameter
> Cc: Pekka Enberg
> ---
> Changes in v2.1:
> - add missing kmalloc_order forward declarati
>>>> Sure thing. Why not. :)
>>>
>>> *sigh* actually, the plot thickens a bit with SHMALL (total size of shm
>>> segments system wide, in pages). Currently by default:
>>>
>>> #define SHMALL (SHMMAX/getpagesize()*(SHMMNI/16))
>>>
>>> Thi
any possibility of misaccounting an allocation
> going from one memcg's cache to another memcg, because now we always
> charge slabs against the memcg the cache belongs to. That's why this
> patch removes the big comment to memcg_kmem_get_cache.
>
> Signed-off-by: Vladimir Davydov
Acked-by: Greg Thelen
On Tue, Apr 01 2014, Davidlohr Bueso wrote:
> On Tue, 2014-04-01 at 19:56 -0400, KOSAKI Motohiro wrote:
>> >> > Ah-hah, that's interesting info.
>> >> >
>> >> > Let's make the default 64GB?
>> >>
>> >> 64GB is infinity at that time, but it no longer near infinity today. I
>> >> like
>> >> very large or total
On Tue, Apr 01 2014, Vladimir Davydov wrote:
> Currently to allocate a page that should be charged to kmemcg (e.g.
> threadinfo), we pass __GFP_KMEMCG flag to the page allocator. The page
> allocated is then to be freed by free_memcg_kmem_pages. Apart from
> looking asymmetrical, this also
the default value, users can potentially DoS the
system, or at least cause excessive swapping if not manually set, but
then again the same goes for anon mem... so do we care?
(2014/04/02 10:08), Greg Thelen wrote:
At least when there's an egregious anon leak the oom killer has the
power
On Thu, Mar 27, 2014 at 12:37 AM, Vladimir Davydov
wrote:
> Hi Greg,
>
> On 03/27/2014 08:31 AM, Greg Thelen wrote:
>> On Wed, Mar 26 2014, Vladimir Davydov wrote:
>>
>>> We don't track any random page allocation, so we shouldn't track kmalloc
>>> that falls back
On Wed, Mar 26 2014, Vladimir Davydov wrote:
> We don't track any random page allocation, so we shouldn't track kmalloc
> that falls back to the page allocator.
This seems like a change which will lead to confusing (and arguably
improper) kernel behavior. I prefer the behavior prior to this
On Mon, Feb 03 2014, Michal Hocko wrote:
> On Thu 30-01-14 16:28:27, Greg Thelen wrote:
>> On Thu, Jan 30 2014, Michal Hocko wrote:
>>
>> > On Wed 29-01-14 11:08:46, Greg Thelen wrote:
>> > [...]
>> >> The series looks useful. We (Google) have been using something similar.
>> >> In practice such a low_limit
On Thu, Jan 30 2014, Michal Hocko wrote:
> On Wed 29-01-14 11:08:46, Greg Thelen wrote:
> [...]
>> The series looks useful. We (Google) have been using something similar.
>> In practice such a low_limit (or memory guarantee), doesn't nest very
>> well.
>>
>> Example:
>> - parent_memcg: limit 500, low_limit
On Wed, Dec 11 2013, Michal Hocko wrote:
> Hi,
> previous discussions have shown that soft limits cannot be reformed
> (http://lwn.net/Articles/555249/). This series introduces an alternative
> approach to protecting memory allocated to processes executing within
> a memory cgroup controller. It
ed out by re-introducing the old test within
> the racy critical sections.
>
> This patch introduces ipc_valid_object() to consolidate the way we cope with
> IPC_RMID races by using the same abstraction across the API implementation.
>
> Signed-off-by: Rafael Aquini
Acked-by: Greg Thelen
82a51 ("ipc,shm: shorten critical region for shmctl")
Signed-off-by: Greg Thelen
Cc: # 3.10.17+ 3.11.6+
---
ipc/shm.c | 28 +++-
1 file changed, 23 insertions(+), 5 deletions(-)
diff --git a/ipc/shm.c b/ipc/shm.c
index d69739610fd4..0bdf21c6814e 100644
--- a/ipc/shm.c
+++ b/ipc/shm.c
On Mon, Nov 04 2013, Andrew Morton wrote:
> On Sun, 27 Oct 2013 10:30:15 -0700 Greg Thelen wrote:
>
>> Tests various percpu operations.
>
> Could you please take a look at the 32-bit build (this is i386):
>
> lib/percpu_test.c: In function 'percpu_test_init':
> lib/percpu_test.c:61
fix is to subtract the unsigned page count rather than adding its
negation. This only works once "percpu: fix this_cpu_sub() subtrahend
casting for unsigneds" is applied to fix this_cpu_sub().
Signed-off-by: Greg Thelen
Acked-by: Tejun Heo
---
mm/memcontrol.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
Tests various percpu operations.
Enable with CONFIG_PERCPU_TEST=m.
Signed-off-by: Greg Thelen
Acked-by: Tejun Heo
---
lib/Kconfig.debug | 9
lib/Makefile | 2 +
lib/percpu_test.c | 138 ++
3 files changed, 149 insertions
);
Signed-off-by: Greg Thelen
Acked-by: Tejun Heo
---
arch/x86/include/asm/percpu.h | 3 ++-
include/linux/percpu.h| 8
2 files changed, 6 insertions(+), 5 deletions(-)
diff --git a/arch/x86/include/asm/percpu.h b/arch/x86/include/asm/percpu.h
index 0da5200..b3e18f8 100644
--- a/arch
est module description now
referring to per cpu operations rather than per cpu counters.
- move small test code update from patch 2 to patch 1 (where the test is
introduced).
Greg Thelen (3):
percpu: add test module for various percpu operations
percpu: fix this_cpu_sub() subtrahend casting for unsigneds
memcg
On Sun, Oct 27 2013, Greg Thelen wrote:
> this_cpu_sub() is implemented as negation and addition.
>
> This patch casts the adjustment to the counter type before negation to
> sign extend the adjustment. This helps in cases where the counter
> type is wider than an unsigned adjustment. An alternative
On Sun, Oct 27 2013, Tejun Heo wrote:
> On Sun, Oct 27, 2013 at 05:04:29AM -0700, Andrew Morton wrote:
>> On Sun, 27 Oct 2013 07:22:55 -0400 Tejun Heo wrote:
>>
>> > We probably want to cc stable for this and the next one. How should
>> > these be routed? I can take these through percpu tree
fix is to subtract the unsigned page count rather than adding its
negation. This only works with the "percpu counter: cast
this_cpu_sub() adjustment" patch which fixes this_cpu_sub().
Signed-off-by: Greg Thelen
---
mm/memcontrol.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
ter, -(int)nr_pages)
admitting that __this_cpu_add/sub() doesn't work with unsigned adjustments. But
I felt like fixing the core services to prevent this in the future.
Greg Thelen (3):
percpu counter: test module
percpu counter: cast this_cpu_sub() adjustment
memcg: use __this_cpu_sub to decre
);
Signed-off-by: Greg Thelen
---
arch/x86/include/asm/percpu.h | 3 ++-
include/linux/percpu.h| 8
lib/percpu_test.c | 2 +-
3 files changed, 7 insertions(+), 6 deletions(-)
diff --git a/arch/x86/include/asm/percpu.h b/arch/x86/include/asm/percpu.h
index 0da5200
Tests various percpu operations.
Enable with CONFIG_PERCPU_TEST=m.
Signed-off-by: Greg Thelen
---
lib/Kconfig.debug | 9
lib/Makefile | 2 +
lib/percpu_test.c | 138 ++
3 files changed, 149 insertions(+)
create mode 100644 lib
a6 00 00 00 48 8b b2 30 02 00 00 45 89 ca <4c> 39 56 18 0f 8c 36
>> 01 00 00 44 89 c9
>> f7 d9 89 cf 65 48 01 7e
>> [ 7691.528638] RIP [] mem_cgroup_move_account+0xf4/0x290
>>
>> Add the required __this_cpu_read().
>
> Sorry for my mistake and thanks for the fix up,
---
From c1f43ef0f4cc42fb2ecaeaca71bd247365e3521e Mon Sep 17 00:00:00 2001
From: Greg Thelen gthe...@google.com
Date: Fri, 25 Oct 2013 21:59:57 -0700
Subject: [PATCH] memcg: remove incorrect underflow check
When a memcg is deleted mem_cgroup_reparent_charges() moves charged
memory to the parent memcg
0 N3=0
hierarchical_total=908 N0=552 N1=317 N2=39 N3=0
hierarchical_file=850 N0=549 N1=301 N2=0 N3=0
hierarchical_anon=58 N0=3 N1=16 N2=39 N3=0
hierarchical_unevictable=0 N0=0 N1=0 N2=0 N3=0
Signed-off-by: Ying Han
Signed-off-by: Greg Thelen
---
Changelog since v3:
- push 'iter' local variabl
11,736,855 b31717 vmlinux.after
Signed-off-by: Greg Thelen
Signed-off-by: Ying Han
---
Changelog since v3:
- Use ARRAY_SIZE(stats) rather than array terminator.
- rebased to latest linus/master (d8efd82) to incorporate 182446d08 "cgroup:
pass around cgroup_subsys_state instead of cgroup in file me
N3=0
hierarchical_total=73 N0=0 N1=41 N2=32 N3=0
hierarchical_file=14 N0=0 N1=0 N2=14 N3=0
hierarchical_anon=59 N0=0 N1=41 N2=18 N3=0
hierarchical_unevictable=0 N0=0 N1=0 N2=0 N3=0
Signed-off-by: Ying Han
Signed-off-by: Greg Thelen
---
Changelog since v2:
- reworded Documentation/cgroup/memory.txt
- updated
11,627,372 b16b6c vmlinux.after
Signed-off-by: Greg Thelen
Signed-off-by: Ying Han
---
Changelog since v2:
- rebased to v3.11
- updated commit description
mm/memcontrol.c | 57 +++--
1 file changed, 23 insertions(+), 34 deletions(-)
diff --git a/mm
ntroduction of
memcg threshold notifications in v2.6.34-rc1-116-g2e72b6347c94 "memcg:
implement memory thresholds"
Signed-off-by: Greg Thelen
---
mm/memcontrol.c | 8 +++-
1 file changed, 7 insertions(+), 1 deletion(-)
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 0878ff7..aa44621 100644
On Wed, May 08 2013, Seth Jennings wrote:
> debugfs currently lacks the ability to create attributes
> that set/get atomic_t values.
>
> This patch adds support for this through a new
> debugfs_create_atomic_t() function.
>
> Signed-off-by: Seth Jennings
> Acked-by: Greg Kroah-Hartman
>
On Wed, Apr 10 2013, Andrew Morton wrote:
> On Tue, 09 Apr 2013 17:37:20 -0700 Greg Thelen wrote:
>
>> > Call cond_resched() in shrink_dcache_parent() to maintain
>> > interactivity.
>> >
>> > Before this patch:
>> >
>> > void shrink_dcache_parent(struct dentry * parent)
>> > {
>> >         while ((found