On Wed 29-05-13 16:01:54, Johannes Weiner wrote:
> On Wed, May 29, 2013 at 05:57:56PM +0200, Michal Hocko wrote:
> > On Wed 29-05-13 15:05:38, Michal Hocko wrote:
> > > On Mon 27-05-13 19:13:08, Michal Hocko wrote:
> > > [...]
> > > > Nevertheless I have encountered an issue while testing the huge
> > > > number of groups scenario.
On Wed 29-05-13 16:54:00, Michal Hocko wrote:
[...]
> I am still running kbuild tests with the same configuration to see a
> more general workload.
And here we go with the kbuild numbers. Same configuration (mem=1G, one
group for the kernel build - it is actually expanding the tree + building
a distro config).
On Wed, May 29, 2013 at 05:57:56PM +0200, Michal Hocko wrote:
> On Wed 29-05-13 15:05:38, Michal Hocko wrote:
> > On Mon 27-05-13 19:13:08, Michal Hocko wrote:
> > [...]
> > > Nevertheless I have encountered an issue while testing the huge number
> > > of groups scenario. And the issue is not limited only to this
> > > scenario, unfortunately.
On Wed 29-05-13 15:05:38, Michal Hocko wrote:
> On Mon 27-05-13 19:13:08, Michal Hocko wrote:
> [...]
> > Nevertheless I have encountered an issue while testing the huge number
> > of groups scenario. And the issue is not limited only to this
> > scenario, unfortunately. As memcg iterators use a per node-zone-priority
> > cache to prevent over-reclaim
On Mon 27-05-13 19:13:08, Michal Hocko wrote:
[...]
> > I think that the numbers can be improved even without introducing
> > the list of groups in excess. One way to go could be introducing a
> > conditional (callback) to the memcg iterator so the groups under the
> > limit would be excluded during the iteration.
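A minimal sketch of that idea, assuming a hypothetical filtered iterator
variant (mem_cgroup_iter_cond() and the predicate type below are
illustrations of the proposal, not an existing interface; only
res_counter_soft_limit_excess() was a real helper at the time):

/*
 * Hypothetical sketch only: teach the memcg iterator to skip groups
 * which are not over their soft limit instead of visiting every group
 * in the hierarchy.
 */
typedef bool (*mem_cgroup_iter_filter)(struct mem_cgroup *memcg,
				       struct mem_cgroup *root);

/* Example predicate: visit only groups above their soft limit. */
static bool soft_limit_exceeded(struct mem_cgroup *memcg,
				struct mem_cgroup *root)
{
	return res_counter_soft_limit_excess(&memcg->res) > 0;
}

static void soft_reclaim_hierarchy(struct mem_cgroup *root,
				   struct mem_cgroup_reclaim_cookie *reclaim)
{
	struct mem_cgroup *memcg = NULL;

	/* mem_cgroup_iter_cond() is the hypothetical filtered variant. */
	while ((memcg = mem_cgroup_iter_cond(root, memcg, reclaim,
					     soft_limit_exceeded))) {
		/* shrink the LRU lists of this group here ... */
	}
}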
On Mon 27-05-13 19:13:08, Michal Hocko wrote:
[...]
> Nevertheless I have encountered an issue while testing the huge number
> of groups scenario. And the issue is not limited only to this
> scenario, unfortunately. As memcg iterators use a per node-zone-priority
> cache to prevent over-reclaim
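For context, the cache referred to here is a cached iterator position kept
per zone and per reclaim priority; a simplified picture follows (layout and
field names approximated from memory, not the exact upstream definitions):

/*
 * Simplified illustration: each memcg keeps one cached iterator slot
 * per zone and per reclaim priority so that concurrent reclaimers
 * share a single walk of the hierarchy instead of all starting from
 * the same group (which would over-reclaim it).
 */
struct mem_cgroup_reclaim_iter {
	struct mem_cgroup *position;	/* last group handed out */
	unsigned int generation;	/* bumped on every full round trip */
};

struct mem_cgroup_per_zone {
	/* one cached position for every reclaim priority level */
	struct mem_cgroup_reclaim_iter reclaim_iter[DEF_PRIORITY + 1];
	/* ... per-zone LRU state ... */
};

/* Reclaimers select the slot they want through a cookie: */
struct mem_cgroup_reclaim_cookie {
	struct zone *zone;
	int priority;
	unsigned int generation;
};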
Hi,
it took me a bit longer than I wanted, but I was stuck in a conference
room at the end of last week so I didn't have much time.
On Mon 20-05-13 16:44:38, Michal Hocko wrote:
> On Fri 17-05-13 12:02:47, Johannes Weiner wrote:
> > On Mon, May 13, 2013 at 09:46:10AM +0200, Michal Hocko wrote:
On Mon 20-05-13 16:44:38, Michal Hocko wrote:
[...]
> I had one group (call it A) with the streaming IO load (dd if=/dev/zero
> of=file with 4*TotalRam size) and a parallel hierarchy with 2 groups
> with up to 12 levels each (512, 1024, 4096, 8192 groups) and no limit
> set. I have compared the re
On Fri 17-05-13 12:02:47, Johannes Weiner wrote:
> On Mon, May 13, 2013 at 09:46:10AM +0200, Michal Hocko wrote:
> > Memcg soft reclaim has been traditionally triggered from the global
> > reclaim paths before calling shrink_zone. mem_cgroup_soft_limit_reclaim
> > then picked up a group which exceeds the soft limit the most and
> > reclaimed it with 0 priority to reclaim at least SWAP_CLUSTER_MAX pages.
Hello,
On Fri, May 17, 2013 at 10:27 AM, Johannes Weiner wrote:
>>Hmmm... if the iteration is the problem, it shouldn't be difficult to
>>build a list of children which should be iterated. Would that make it
>>acceptable?
>
> You mean, a separate structure that tracks which groups are in excess of
> their soft limit?
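One possible shape of such bookkeeping, sketched purely for illustration
(the excess_node field, the list head and the helper below are made-up
names, not an existing kernel interface): groups are added to and removed
from a list as their usage crosses the soft limit, and soft reclaim walks
only that list instead of the whole hierarchy.

/* Hypothetical sketch: track only the groups that exceed their soft limit. */
static LIST_HEAD(soft_limit_excess_list);
static DEFINE_SPINLOCK(soft_limit_excess_lock);

/*
 * Would be called from the charge/uncharge event path (e.g. from
 * memcg_check_events()) whenever usage crosses the soft limit in
 * either direction.  'excess_node' would be a new list_head added to
 * struct mem_cgroup.
 */
static void memcg_update_excess_list(struct mem_cgroup *memcg)
{
	bool over = res_counter_soft_limit_excess(&memcg->res) > 0;

	spin_lock(&soft_limit_excess_lock);
	if (over && list_empty(&memcg->excess_node))
		list_add_tail(&memcg->excess_node, &soft_limit_excess_list);
	else if (!over && !list_empty(&memcg->excess_node))
		list_del_init(&memcg->excess_node);
	spin_unlock(&soft_limit_excess_lock);
}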
Tejun Heo wrote:
>Hello, Johannes.
>
>On Fri, May 17, 2013 at 12:02:47PM -0400, Johannes Weiner wrote:
>> There are setups with thousands of groups that do not even use soft
>> limits. Having them pointlessly iterate over all of them for every
>> couple of pages reclaimed is just not acceptable.
Hello, Johannes.
On Fri, May 17, 2013 at 12:02:47PM -0400, Johannes Weiner wrote:
> There are setups with thousands of groups that do not even use soft
> limits. Having them pointlessly iterate over all of them for every
> couple of pages reclaimed is just not acceptable.
Hmmm... if the iteration is the problem, it shouldn't be difficult to
build a list of children which should be iterated. Would that make it
acceptable?
On Mon, May 13, 2013 at 09:46:10AM +0200, Michal Hocko wrote:
> Memcg soft reclaim has been traditionally triggered from the global
> reclaim paths before calling shrink_zone. mem_cgroup_soft_limit_reclaim
> then picked up a group which exceeds the soft limit the most and
> reclaimed it with 0 priority to reclaim at least SWAP_CLUSTER_MAX pages.
On Thu 16-05-13 15:15:01, Tejun Heo wrote:
> One more thing,
>
> Given that this is a rather significant behavior change, it probably
> is a good idea to include the benchmark results from the head
> message?
The testing I have done was on top of the complete series. The last
patch should be
On Thu 16-05-13 15:12:00, Tejun Heo wrote:
> Sorry about the delay. Just getting back to memcg.
>
> On Mon, May 13, 2013 at 09:46:10AM +0200, Michal Hocko wrote:
> ...
> > during the first pass. Only groups which are over their soft limit or
> > any of their parents up the hierarchy is over the limit are considered
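A sketch of what that first-pass eligibility test could look like
(illustrative only; the function name is made up, while parent_mem_cgroup()
and res_counter_soft_limit_excess() were real helpers at the time):

/*
 * Illustrative only: a group takes part in soft reclaim if it, or any
 * of its ancestors up to the hierarchy root being reclaimed, exceeds
 * its soft limit.
 */
static bool soft_reclaim_eligible(struct mem_cgroup *memcg,
				  struct mem_cgroup *root)
{
	struct mem_cgroup *pos;

	for (pos = memcg; pos; pos = parent_mem_cgroup(pos)) {
		if (res_counter_soft_limit_excess(&pos->res))
			return true;
		if (pos == root)
			break;
	}
	return false;
}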
One more thing,
Given that this is a rather significant behavior change, it probably
is a good idea to include the benchmark results from the head
message?
Thanks.
--
tejun
Sorry about the delay. Just getting back to memcg.
On Mon, May 13, 2013 at 09:46:10AM +0200, Michal Hocko wrote:
...
> during the first pass. Only groups which are over their soft limit or
> any of their parents up the hierarchy is over the limit are considered
ancestors?
> +static void shrink_
On 05/13/2013 11:46 AM, Michal Hocko wrote:
> Memcg soft reclaim has been traditionally triggered from the global
> reclaim paths before calling shrink_zone. mem_cgroup_soft_limit_reclaim
> then picked up a group which exceeds the soft limit the most and
> reclaimed it with 0 priority to reclaim at least SWAP_CLUSTER_MAX pages.
Memcg soft reclaim has been traditionally triggered from the global
reclaim paths before calling shrink_zone. mem_cgroup_soft_limit_reclaim
then picked up a group which exceeds the soft limit the most and
reclaimed it with 0 priority to reclaim at least SWAP_CLUSTER_MAX pages.
The infrastructure r
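For reference, the hook being replaced sits in the global reclaim paths and
looks roughly like this (a simplified shape of the shrink_zones()/kswapd
call site of that era, not an exact quote of the source):

	/*
	 * Before shrinking a zone, reclaim from the one group with the
	 * biggest soft limit excess, at priority 0, until at least
	 * SWAP_CLUSTER_MAX pages have been reclaimed from it.
	 */
	nr_soft_scanned = 0;
	nr_soft_reclaimed = mem_cgroup_soft_limit_reclaim(zone, sc->order,
							  sc->gfp_mask,
							  &nr_soft_scanned);
	sc->nr_reclaimed += nr_soft_reclaimed;
	sc->nr_scanned += nr_soft_scanned;

	shrink_zone(zone, sc);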