On Thu 23-06-16 13:53:12, Mel Gorman wrote:
> On Wed, Jun 22, 2016 at 04:27:57PM +0200, Michal Hocko wrote:
> > > which can use it (e.g. vmalloc). I understand how this is both an
> > > inherent problem of 32b with a larger high:low ratio and why it is hard
> > > to even pretend we can cope with it with a node-based approach, but we
> > > should at least document it.
> > > 
> > > A workaround would be to enable highmem_is_dirtyable, which can lead
> > > to a premature OOM killer for some workloads AFAIR.
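> > > 
> > > For reference, the workaround matters because global_dirtyable_memory()
> > > subtracts the highmem estimate only while the sysctl is left disabled,
> > > roughly (leaving out the reserve handling, and modulo the exact
> > > counters in the current tree):
> > > 
> > > 	x = global_page_state(NR_FREE_PAGES);
> > > 	x += global_page_state(NR_INACTIVE_FILE);
> > > 	x += global_page_state(NR_ACTIVE_FILE);
> > > 	if (!vm_highmem_is_dirtyable)
> > > 		x -= highmem_dirtyable_memory(x);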
> > [...]
> > > >  static unsigned long highmem_dirtyable_memory(unsigned long total)
> > > >  {
> > > >  #ifdef CONFIG_HIGHMEM
> > > > -       int node;
> > > >         unsigned long x = 0;
> > > > -       int i;
> > > > -
> > > > -       for_each_node_state(node, N_HIGH_MEMORY) {
> > > > -               for (i = 0; i < MAX_NR_ZONES; i++) {
> > > > -                       struct zone *z = &NODE_DATA(node)->node_zones[i];
> > > >  
> > > > -                       if (is_highmem(z))
> > > > -                               x += zone_dirtyable_memory(z);
> > > > -               }
> > > > -       }
> > 
> > Hmm, I have just noticed that we have NR_ZONE_LRU_ANON and
> > NR_ZONE_LRU_FILE, so we can estimate the highmem contribution to the
> > global counters by something like the following:
> > 
> >     for_each_node_state(node, N_HIGH_MEMORY) {
> >             for (i = 0; i < MAX_NR_ZONES; i++) {
> >                     struct zone *z = &NODE_DATA(node)->node_zones[i];
> > 
> >                     if (!is_highmem(z))
> >                             continue;
> > 
> >                     x += zone_page_state(z, NR_FREE_PAGES) +
> >                             zone_page_state(z, NR_ZONE_LRU_FILE) -
> >                             high_wmark_pages(z);
> >             }
> >     }
> > 
> > The high wmark reduction is there to emulate the reserve. What do you think?
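> > 
> > (E.g. a highmem zone with 10k free pages, 50k pages on the file LRU
> > and a high watermark of 2k pages would contribute 58k dirtyable
> > pages; the watermark part stays reserved for the allocator.)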
> 
> Agreed with minor modifications. Went with this:
> 
>         for_each_node_state(node, N_HIGH_MEMORY) {
>                 for (i = ZONE_NORMAL + 1; i < MAX_NR_ZONES; i++) {
>                         struct zone *z;
> 
>                         if (!is_highmem_idx(i))
>                                 continue;
> 
>                         z = &NODE_DATA(node)->node_zones[i];
>                         x += zone_page_state(z, NR_FREE_PAGES) +
>                                 zone_page_state(z, NR_ZONE_LRU_FILE) -
>                                 high_wmark_pages(z);

I guess you will still need underflow protection, because the sum of free
and file LRU pages might be below the high wmark:

                        unsigned long dirtyable;

                        dirtyable = zone_page_state(z, NR_FREE_PAGES) +
                                        zone_page_state(z, NR_ZONE_LRU_FILE);
                        if (dirtyable > high_wmark_pages(z))
                                dirtyable -= high_wmark_pages(z);

                        x += dirtyable;
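
FWIW, with the underflow check folded in (using the usual min() idiom)
the whole helper would then look something like the below - untested,
and assuming the NR_ZONE_LRU_FILE naming from your tree:

static unsigned long highmem_dirtyable_memory(unsigned long total)
{
#ifdef CONFIG_HIGHMEM
	int node;
	unsigned long x = 0;
	int i;

	for_each_node_state(node, N_HIGH_MEMORY) {
		for (i = ZONE_NORMAL + 1; i < MAX_NR_ZONES; i++) {
			struct zone *z;
			unsigned long dirtyable;

			if (!is_highmem_idx(i))
				continue;

			z = &NODE_DATA(node)->node_zones[i];
			dirtyable = zone_page_state(z, NR_FREE_PAGES) +
				zone_page_state(z, NR_ZONE_LRU_FILE);

			/* the high watermark is not dirtyable */
			dirtyable -= min(dirtyable, high_wmark_pages(z));

			x += dirtyable;
		}
	}

	/* never report more highmem than total dirtyable memory */
	return min(x, total);
#else
	return 0;
#endif
}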
-- 
Michal Hocko
SUSE Labs
