On Thu, Dec 29, 2016 at 10:04:32AM +0100, Michal Hocko wrote:
> On Thu 29-12-16 10:20:26, Minchan Kim wrote:
> > On Tue, Dec 27, 2016 at 04:55:33PM +0100, Michal Hocko wrote:
> > > Hi,
> > > could you try to run with the following patch on top of the previous
> -	size = lruvec_lru_size(lruvec, lru);
> +	size = lruvec_lru_size_zone_idx(lruvec, lru,
> +					sc->reclaim_idx);
> 	scan = size >> sc->priority;
>
> 	if (!scan && pass && force_scan)
> --
>
On Thu, Dec 29, 2016 at 09:31:54AM +0900, Minchan Kim wrote:
> On Mon, Dec 26, 2016 at 01:48:40PM +0100, Michal Hocko wrote:
> > On Fri 23-12-16 23:26:00, Nils Holland wrote:
> > > On Fri, Dec 23, 2016 at 03:47:39PM +0100, Michal Hocko wrote:
> > > >
when memcg is enabled. Introduce
> helper lruvec_zone_lru_size which redirects to either zone counters or
> mem_cgroup_get_zone_lru_size when appropriate.
>
> We are losing the empty-LRU-but-non-zero-lru-size detection introduced by
> ca707239e8a7 ("mm: update_lru_size warn and reset ba
per-zone level, where the distance between reclaim and the dirty pages
> is mostly much smaller in absolute numbers.
>
> Signed-off-by: Johannes Weiner
> Reviewed-by: Rik van Riel
Reviewed-by: Minchan Kim
--
Kind regards,
Minchan Kim
--
On Wed, Sep 28, 2011 at 09:50:54AM +0200, Johannes Weiner wrote:
> On Wed, Sep 28, 2011 at 01:55:51PM +0900, Minchan Kim wrote:
> > Hi Hannes,
> >
> > On Fri, Sep 23, 2011 at 04:38:17PM +0200, Johannes Weiner wrote:
> > > The amount of dirtyable pages should
On Wed, Sep 28, 2011 at 09:11:54AM +0200, Johannes Weiner wrote:
> On Wed, Sep 28, 2011 at 02:56:40PM +0900, Minchan Kim wrote:
> > On Fri, Sep 23, 2011 at 04:42:48PM +0200, Johannes Weiner wrote:
> > > The maximum number of dirty pages that exist in the system at any time
On Tue, Sep 20, 2011 at 03:45:14PM +0200, Johannes Weiner wrote:
> Tell the page allocator that pages allocated through
> grab_cache_page_write_begin() are expected to become dirty soon.
>
> Signed-off-by: Johannes Weiner
Reviewed-by: Minchan Kim
--
Kind regards,
Minchan Kim
; their relationship is more apparent and that they can be commented on
> as a group.
>
> Signed-off-by: Johannes Weiner
Reviewed-by: Minchan Kim
--
Kind regards,
Minchan Kim
--
To unsubscribe from this list: send the line "unsubscribe linux-btrfs" in
the body of a message to majord...@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
ng into direct reclaim or even failing the
> allocation, until a future patch changes the global dirty throttling
> and flusher thread activation so that they take individual zone states
> into account.
>
> Signed-off-by: Johannes Weiner
Otherwise, looks good to me.
Reviewed-by: Minchan Kim
+ * regarded as dirtyable memory, to prevent a
> + * situation where reclaim has to clean pages
> + * in order to balance the zones.
> + */
Could you put Mel's description instead of it if you don
for the data can also be freed.
> Also, set the superblock's cleancache_poolid to be invalid
> and, in cleancache, recycle the poolid so a subsequent init_fs
> operation can reuse it.
>
> That's all!
>
> Thanks,
> Dan
>
At least, I wasn't confused by your semantics, except for flush; that's
why I suggested changing only flush. But after seeing your explanation,
there is another thing I want to change. get/put is the common semantic
for reference counting in the kernel, and from the POV of your semantics
it makes sense to me, but your get has an exclusive semantic, so I want
to represent that in the API name. Maybe cleancache_get_page_exclusive.
The summary is that I don't want to change all the API names, just two
things (I am not sure you and others agree with me; it's just a
suggestion):
1. cleancache_flush_page -> cleancache_[invalidate|remove]_page
2. cleancache_get_page -> cleancache_get_page_exclusive
BTW, nice description. Please include it in the documentation if we
can't reach a conclusion. It will help others understand the semantics
of cleancache.
Thanks, Dan.
--
Kind regards,
Minchan Kim
> +	 * invalidate any existing cleancache entries.  We can't leave
> +	 * stale data around in the cleancache once our page is gone
> +	 */
> +	if (PageUptodate(page) && PageMappedToDisk(page))
> +		cleancache_put_page(page);
> +	else
>
On Wed, Feb 16, 2011 at 10:27 AM, Dan Magenheimer wrote:
>> -----Original Message-----
>> From: Matt [mailto:jackdac...@gmail.com]
>> Sent: Tuesday, February 15, 2011 5:12 PM
>> To: Minchan Kim
>> Cc: Dan Magenheimer; gre...@suse.de; Chris Mason; linux-
t_page(), if it wants this feature.
filemap_fault works only in the case of a file-backed page which is
mapped, but it doesn't work for a non-mapped cache page. So we could miss
cache pages accessed via the read system call if we move it into
filemap_fault.
--
Kind regards,
Minchan Kim
--
oblem for Xen is
> described in the tmem internals document that I think
> I pointed to earlier here:
> http://oss.oracle.com/projects/tmem/documentation/internals/
I will read it when I have time.
Thanks for the quick reply, but I can't right now.
It's time to sleep and the weekend.
See you
f backend.
so a lot of page scanning/reclaiming could happen.
It means hot pages could be discarded with this patch.
But that's just a guess.
So we need numbers from a test case where we can measure I/O and system
responsiveness.
>
> Thanks,
> Nitin
--
Kind regards,
Minchan Kim
l your hooks in no limited place.
Maybe you have already done it. :)
3)
Please consider system memory pressure.
Without this, the PFRA might reclaim the page, but cleancache's
(non-virtualized) backend may consume another page to put the clean
page. It could change system behavior although it can
internals-v01.html
>
>
> Or did you mean a cleancache_ops "backend"? For tmem, there
> is one file linux/drivers/xen/tmem.c and it interfaces between
> the cleancache_ops calls and Xen hypercalls. It should be in
> a Xenlinux pv_ops tree soon, or I can email it soone
page->index, page);
+	if (ret == CLEANCACHE_GET_PAGE_SUCCESS)
+		succ_gets++;
+	else
+		failed_gets++;
+	}
+	return ret;
+}
+EXPORT_SYMBO