On Wed 13-08-14 16:41:34, Johannes Weiner wrote:
> On Wed, Aug 13, 2014 at 04:59:04PM +0200, Michal Hocko wrote:
[...]
> > I think this shows that my concern about excessive reclaim and stalls
> > is real, and it is worse when the memory is used sparsely. It is true it
> > might help when the who
On Wed, Aug 13, 2014 at 04:59:04PM +0200, Michal Hocko wrote:
> On Fri 08-08-14 09:26:35, Johannes Weiner wrote:
> > On Fri, Aug 08, 2014 at 02:32:58PM +0200, Michal Hocko wrote:
> > > On Thu 07-08-14 11:31:41, Johannes Weiner wrote:
> [...]
> > > > THP latencies are actually the same when comparing
On Fri 08-08-14 09:26:35, Johannes Weiner wrote:
> On Fri, Aug 08, 2014 at 02:32:58PM +0200, Michal Hocko wrote:
> > On Thu 07-08-14 11:31:41, Johannes Weiner wrote:
[...]
> > > THP latencies are actually the same when comparing high limit nr_pages
> > > reclaim with the current hard limit SWAP_CLUSTER_MAX
On Fri 08-08-14 09:26:35, Johannes Weiner wrote:
> On Fri, Aug 08, 2014 at 02:32:58PM +0200, Michal Hocko wrote:
> > On Thu 07-08-14 11:31:41, Johannes Weiner wrote:
[...]
> > > although system time is reduced with the high limit.
> > > High limit reclaim with SWAP_CLUSTER_MAX has better fault latency
On Fri, Aug 08, 2014 at 02:32:58PM +0200, Michal Hocko wrote:
> On Thu 07-08-14 11:31:41, Johannes Weiner wrote:
> > On Thu, Aug 07, 2014 at 03:08:22PM +0200, Michal Hocko wrote:
> > > On Mon 04-08-14 17:14:54, Johannes Weiner wrote:
> > > > Instead of passing the request size to direct reclaim, memcg
On Thu 07-08-14 09:10:43, Greg Thelen wrote:
> On Thu, Aug 07 2014, Johannes Weiner wrote:
[...]
> > So what I'm proposing works and is of equal quality from a THP POV.
> > This change is complicated enough when we stick to the facts, let's
> > not make up things based on gut feeling.
>
> I think
On Thu 07-08-14 11:31:41, Johannes Weiner wrote:
> On Thu, Aug 07, 2014 at 03:08:22PM +0200, Michal Hocko wrote:
> > On Mon 04-08-14 17:14:54, Johannes Weiner wrote:
> > > Instead of passing the request size to direct reclaim, memcg just
> > > manually loops around reclaiming SWAP_CLUSTER_MAX pages
On Thu, Aug 07 2014, Johannes Weiner wrote:
> On Thu, Aug 07, 2014 at 03:08:22PM +0200, Michal Hocko wrote:
>> On Mon 04-08-14 17:14:54, Johannes Weiner wrote:
>> > Instead of passing the request size to direct reclaim, memcg just
>> > manually loops around reclaiming SWAP_CLUSTER_MAX pages until
On Thu, Aug 07, 2014 at 03:08:22PM +0200, Michal Hocko wrote:
> On Mon 04-08-14 17:14:54, Johannes Weiner wrote:
> > Instead of passing the request size to direct reclaim, memcg just
> > manually loops around reclaiming SWAP_CLUSTER_MAX pages until the
> > charge can succeed. That potentially wastes
On Mon 04-08-14 17:14:54, Johannes Weiner wrote:
> Instead of passing the request size to direct reclaim, memcg just
> manually loops around reclaiming SWAP_CLUSTER_MAX pages until the
> charge can succeed. That potentially wastes scan progress when huge
> page allocations require multiple invocations
Instead of passing the request size to direct reclaim, memcg just
manually loops around reclaiming SWAP_CLUSTER_MAX pages until the
charge can succeed. That potentially wastes scan progress when huge
page allocations require multiple invocations, which always have to
restart from the default scan priority.
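
As a rough illustration of the cost described above, here is a minimal
userspace sketch (not the memcontrol.c code itself; the constants simply
mirror the usual kernel values: SWAP_CLUSTER_MAX = 32, HPAGE_PMD_NR = 512
with 4K base pages, DEF_PRIORITY = 12):

/* thp_charge_cost.c - toy arithmetic only, not kernel code */
#include <stdio.h>

#define SWAP_CLUSTER_MAX 32UL   /* per-call reclaim target in the current loop */
#define HPAGE_PMD_NR     512UL  /* base pages behind one 2MB THP (4K pages) */
#define DEF_PRIORITY     12     /* scan priority each invocation restarts at */

int main(void)
{
	/* Current behaviour: retry the charge in SWAP_CLUSTER_MAX batches. */
	unsigned long calls = (HPAGE_PMD_NR + SWAP_CLUSTER_MAX - 1) / SWAP_CLUSTER_MAX;

	printf("pages needed for one THP charge: %lu\n", HPAGE_PMD_NR);
	printf("reclaiming %lu pages per call: up to %lu invocations, "
	       "each restarting at priority %d\n",
	       SWAP_CLUSTER_MAX, calls, DEF_PRIORITY);
	printf("passing nr_pages=%lu instead: 1 invocation that can work toward "
	       "the full goal\n", HPAGE_PMD_NR);
	return 0;
}

With these numbers the charge path needs on the order of 16 separate
reclaim passes, each starting over at the default scan priority, which is
exactly the repeated restart the patch description is talking about.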