On Wed, Feb 11, 2015 at 12:33 PM, Tejun Heo wrote:
[...]
>> page count to throttle based on blkcg's bandwidth. Note: memcg
>> doesn't yet have dirty page counts, but several of us have made
>> attempts at adding the counters. And it shouldn't be hard to get them
>> merged.
>
> Can you please
Hello,
On Thu, Feb 12, 2015 at 02:15:29AM +0400, Konstantin Khlebnikov wrote:
> Well, ok. Even if shared writes are rare they should be handled somehow
> without relying on kupdate-like writeback. If memcg has a lot of dirty pages
This only works iff we consider those cases to be marginal enough
On Thu, Feb 12, 2015 at 1:05 AM, Tejun Heo wrote:
> On Thu, Feb 12, 2015 at 01:57:04AM +0400, Konstantin Khlebnikov wrote:
>> On Thu, Feb 12, 2015 at 12:46 AM, Tejun Heo wrote:
>> > Hello,
>> >
>> > On Thu, Feb 12, 2015 at 12:22:34AM +0300, Konstantin Khlebnikov wrote:
>> >> > Yeah, available
On Thu, Feb 12, 2015 at 01:57:04AM +0400, Konstantin Khlebnikov wrote:
> On Thu, Feb 12, 2015 at 12:46 AM, Tejun Heo wrote:
> > Hello,
> >
> > On Thu, Feb 12, 2015 at 12:22:34AM +0300, Konstantin Khlebnikov wrote:
> >> > Yeah, available memory to the matching memcg and the number of dirty
> >> >
On Thu, Feb 12, 2015 at 12:46 AM, Tejun Heo wrote:
> Hello,
>
> On Thu, Feb 12, 2015 at 12:22:34AM +0300, Konstantin Khlebnikov wrote:
>> > Yeah, available memory to the matching memcg and the number of dirty
>> > pages in it. It's gonna work the same way as the global case just
>> > scoped to
Hello,
On Thu, Feb 12, 2015 at 12:22:34AM +0300, Konstantin Khlebnikov wrote:
> > Yeah, available memory to the matching memcg and the number of dirty
> > pages in it. It's gonna work the same way as the global case just
> > scoped to the cgroup.
>
> That might be a problem: all dirty pages accounted
On Wed, Feb 11, 2015 at 11:33 PM, Tejun Heo wrote:
> Hello, Greg.
>
> On Wed, Feb 11, 2015 at 10:28:44AM -0800, Greg Thelen wrote:
>> This seems good. I assume that blkcg writeback would query
>> corresponding memcg for dirty page count to determine if over
>> background limit. And
Hello, Greg.
On Wed, Feb 11, 2015 at 10:28:44AM -0800, Greg Thelen wrote:
> This seems good. I assume that blkcg writeback would query
> corresponding memcg for dirty page count to determine if over
> background limit. And balance_dirty_pages() would query memcg's dirty
Yeah, available memory
On Tue, Feb 10, 2015 at 6:19 PM, Tejun Heo wrote:
> Hello, again.
>
> On Sat, Feb 07, 2015 at 09:38:39AM -0500, Tejun Heo wrote:
>> If we can argue that memcg and blkcg having different views is
>> meaningful and characterize and justify the behaviors stemming from
>> the deviation, sure, that'd
Hello Tejun,
On Tue 10-02-15 21:19:06, Tejun Heo wrote:
> On Sat, Feb 07, 2015 at 09:38:39AM -0500, Tejun Heo wrote:
> > If we can argue that memcg and blkcg having different views is
> > meaningful and characterize and justify the behaviors stemming from
> > the deviation, sure, that'd be
Hello, again.
On Sat, Feb 07, 2015 at 09:38:39AM -0500, Tejun Heo wrote:
> If we can argue that memcg and blkcg having different views is
> meaningful and characterize and justify the behaviors stemming from
> the deviation, sure, that'd be fine, but I don't think we have that as
> of now.
If we
Hello, Greg.
On Fri, Feb 06, 2015 at 03:43:11PM -0800, Greg Thelen wrote:
> If cgroups are about isolation then writing to shared files should be
> rare, so I'm willing to say that we don't need to handle shared
> writers well. Shared readers seem like a more valuable use case
> (thin
On Fri, Feb 6, 2015 at 6:17 AM, Tejun Heo wrote:
> Hello, Greg.
>
> On Thu, Feb 05, 2015 at 04:03:34PM -0800, Greg Thelen wrote:
>> So this is a system which charges all cgroups using a shared inode
>> (recharge on read) for all resident pages of that shared inode. There's
>> only one copy
Hello, Greg.
On Thu, Feb 05, 2015 at 04:03:34PM -0800, Greg Thelen wrote:
> So this is a system which charges all cgroups using a shared inode
> (recharge on read) for all resident pages of that shared inode. There's
> only one copy of the page in memory on just one LRU, but the page may
On Thu, Feb 05 2015, Tejun Heo wrote:
> Hey,
>
> On Thu, Feb 05, 2015 at 02:05:19PM -0800, Greg Thelen wrote:
>> > A
>> > +-B (usage=2M lim=3M min=2M hosted_usage=2M)
>> >   +-C (usage=0 lim=2M min=1M shared_usage=2M)
>> >   +-D (usage=0 lim=2M min=1M shared_usage=2M)
>> >
Hey,
On Thu, Feb 05, 2015 at 02:05:19PM -0800, Greg Thelen wrote:
> > A
> > +-B (usage=2M lim=3M min=2M hosted_usage=2M)
> >   +-C (usage=0 lim=2M min=1M shared_usage=2M)
> >   +-D (usage=0 lim=2M min=1M shared_usage=2M)
> >   \-E (usage=0 lim=2M min=0)
...
> Maybe,
On Thu, Feb 05 2015, Tejun Heo wrote:
> Hello, Greg.
>
> On Wed, Feb 04, 2015 at 03:51:01PM -0800, Greg Thelen wrote:
>> I think the linux-next low (and the TBD min) limits also have the
>> problem for more than just the root memcg. I'm thinking of a 2M file
>> shared between C and D below.
Hello, Greg.
On Wed, Feb 04, 2015 at 03:51:01PM -0800, Greg Thelen wrote:
> I think the linux-next low (and the TBD min) limits also have the
> problem for more than just the root memcg. I'm thinking of a 2M file
> shared between C and D below. The file will be charged to common parent
> B.
>
On Wed, Feb 04 2015, Tejun Heo wrote:
> Hello,
>
> On Tue, Feb 03, 2015 at 03:30:31PM -0800, Greg Thelen wrote:
>> If a machine has several top level memcg trying to get some form of
>> isolation (using low, min, soft limit) then a shared libc will be
>> moved to the root memcg where it's not
On Wed, Feb 04, 2015 at 08:58:21PM +0300, Konstantin Khlebnikov wrote:
> >>Generally incidental sharing could be handled as temporary sharing:
> >>default policy (if inode isn't pinned to memory cgroup) after some
> >>time should detect that inode is no longer shared and migrate it into
>
On 04.02.2015 20:15, Tejun Heo wrote:
Hello,
On Wed, Feb 04, 2015 at 01:49:08PM +0300, Konstantin Khlebnikov wrote:
I think important shared data must be handled and protected explicitly.
That 'catch-all' shared container could be separated into several
I kinda disagree. That'd be a major
Hello,
On Wed, Feb 04, 2015 at 01:49:08PM +0300, Konstantin Khlebnikov wrote:
> I think important shared data must be handled and protected explicitly.
> That 'catch-all' shared container could be separated into several
I kinda disagree. That'd be a major pain in the ass to use and you
wouldn't
Hello,
On Tue, Feb 03, 2015 at 03:30:31PM -0800, Greg Thelen wrote:
> If a machine has several top level memcg trying to get some form of
> isolation (using low, min, soft limit) then a shared libc will be
> moved to the root memcg where it's not protected from global memory
> pressure. At least
On 04.02.2015 02:30, Greg Thelen wrote:
On Mon, Feb 2, 2015 at 11:46 AM, Tejun Heo wrote:
Hey,
On Mon, Feb 02, 2015 at 10:26:44PM +0300, Konstantin Khlebnikov wrote:
Keeping shared inodes in common ancestor is reasonable.
We could schedule asynchronous moving when somebody opens or mmaps
On Mon, Feb 2, 2015 at 11:46 AM, Tejun Heo wrote:
> Hey,
>
> On Mon, Feb 02, 2015 at 10:26:44PM +0300, Konstantin Khlebnikov wrote:
>
>> Keeping shared inodes in common ancestor is reasonable.
>> We could schedule asynchronous moving when somebody opens or mmaps
>> inode from outside of its
Hey,
On Mon, Feb 02, 2015 at 10:26:44PM +0300, Konstantin Khlebnikov wrote:
> Removing memcg pointer from struct page might be tricky.
> It's not clear what to do with truncated pages: either link them
> with lru differently or remove from lru right at truncate.
> Swap cache pages have the same
On 30.01.2015 19:07, Tejun Heo wrote:
Hey, again.
On Fri, Jan 30, 2015 at 01:27:37AM -0500, Tejun Heo wrote:
The previous behavior was pretty unpredictable in terms of shared file
ownership too. I wonder whether the better thing to do here is either
charging cases like this to the common
Hey, again.
On Fri, Jan 30, 2015 at 01:27:37AM -0500, Tejun Heo wrote:
> The previous behavior was pretty unpredictable in terms of shared file
> ownership too. I wonder whether the better thing to do here is either
> charging cases like this to the common ancestor or splitting the
> charge
Hello, Greg.
On Thu, Jan 29, 2015 at 09:55:53PM -0800, Greg Thelen wrote:
> I find simplification appealing. But I'm not sure it will fly, if for no
> other reason than the shared accountings. I'm ignoring intentional
> sharing, used by carefully crafted apps, and just thinking about
> incidental
On Thu, Jan 29 2015, Tejun Heo wrote:
> Hello,
>
> Since the cgroup writeback patchset[1] has been posted, several
> people brought up concerns about whether the complexity of allowing an
> inode to be dirtied against multiple cgroups is necessary for the purpose of
> writeback and it is true that a
Hello,
Since the cgroup writeback patchset[1] has been posted, several
people brought up concerns about whether the complexity of allowing an
inode to be dirtied against multiple cgroups is necessary for the purpose of
writeback and it is true that a significant amount of complexity (note
that bdi still