On Thu 16-06-16 08:19 PM, Michal Hocko wrote:
> On Thu 16-06-16 18:08:57, Odzioba, Lukasz wrote:
> > I am not able to find clear reasons why we shouldn't do it for the rest.
> > OK, so what do we do now? I'll send a v2 with the proposed changes.
> > Then do we still want to have stats on those pvecs?
> In my
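As an aside on the stats question: the pages parked in the lru-add pagevecs
could be counted with something like the sketch below. lru_add_pvec_count()
is a hypothetical helper, but pagevec_count() and the per-cpu lru_add_pvec
it walks do exist in mm/swap.c, which is where this would have to live.

static unsigned long lru_add_pvec_count(void)
{
        unsigned long pages = 0;
        int cpu;

        /* Sum the entries currently parked in each CPU's lru-add pagevec. */
        for_each_online_cpu(cpu)
                pages += pagevec_count(&per_cpu(lru_add_pvec, cpu));

        return pages;
}

Note that each entry may be a compound page, so the entry count alone
understates the bytes involved: one "page" here can be a whole 2 MiB THP.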
On Thu 16-06-16 18:08:57, Odzioba, Lukasz wrote:
> On Thu 09-06-16 02:22 PM, Michal Hocko wrote:
> > I agree it would be better to do the same for others as well. Even if
> > this is not an immediate problem for those.
>
> I am not able to find clear reasons why we shouldn't do it for the rest.
>
On Thu 09-06-16 02:22 PM, Michal Hocko wrote:
> I agree it would be better to do the same for others as well. Even if
> this is not an immediate problem for those.
I am not able to find clear reasons why we shouldn't do it for the rest.
OK, so what do we do now? I'll send a v2 with the proposed changes.
On 09-06-16 17:42:00, Dave Hansen wrote:
> Does your workload put large pages in and out of those pvecs, though?
> If your system doesn't have any activity, then all we've shown is that
> they're not a problem when not in use. But what about when we use them?
It doesn't. To use them extensively I
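The kind of exercise Dave is asking about could be approximated with a
sketch like the one below (illustrative only, not the thread's actual
workload; LEN is an arbitrary assumed size): fault in THP-backed anonymous
memory and exit without munmap, one copy pinned to each CPU, interrupting
them mid-run as in the ctrl-c scenario.

#define _GNU_SOURCE
#include <string.h>
#include <sys/mman.h>

#define LEN (64UL << 21)        /* 64 x 2 MiB huge pages */

int main(void)
{
        char *p = mmap(NULL, LEN, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

        if (p == MAP_FAILED)
                return 1;
        /* Hint THP and fault everything in so compound pages hit the LRU. */
        madvise(p, LEN, MADV_HUGEPAGE);
        memset(p, 1, LEN);
        /*
         * No munmap: the most recently faulted compound pages still sit in
         * this CPU's lru_add_pvec holding a reference, so even after exit
         * the memory can look used until something drains the pvec.
         */
        return 0;
}

Run one instance per CPU (e.g. under taskset) and compare MemFree before
and after.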
On 06/09/2016 01:50 AM, Odzioba, Lukasz wrote:
> On 08-06-16 17:31:00, Dave Hansen wrote:
>> Do we have any statistics that tell us how many pages are sitting in the
>> lru pvecs? Although this helps the problem overall, don't we still have
>> a problem with memory being held in such an opaque place?
On Wed 08-06-16 09:34:01, Dave Hansen wrote:
> On 06/08/2016 09:06 AM, Michal Hocko wrote:
> > > Do we have any statistics that tell us how many pages are sitting in the
> > > lru pvecs? Although this helps the problem overall, don't we still have
> > > a problem with memory being held in such an
On 08-06-16 17:31:00, Dave Hansen wrote:
> Do we have any statistics that tell us how many pages are sitting in the
> lru pvecs? Although this helps the problem overall, don't we still have
> a problem with memory being held in such an opaque place?
From what I observed, the problem is mainly with l
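To put a rough number on that opaque place (illustrative figures, not
measurements from this thread): PAGEVEC_SIZE is 14, so with 2 MiB THPs a
single per-cpu pagevec can pin 14 * 2 MiB = 28 MiB per CPU. On a 64-CPU
machine that is already ~1.75 GiB, and there are several such pagevecs
(lru_add, lru_rotate, the two deactivate variants), none of it visible in
any counter userspace can read.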
On Wed 08-06-16 17:04:00, Michal Hocko wrote:
> I do not see how a SIGTERM would make any difference. But see below.
This is how we encountered the problem initially: by hitting ctrl-c while
running a parallel memory-intensive workload, which ended up not calling
munmap on the allocated memory.
> Is th
On 06/08/2016 09:06 AM, Michal Hocko wrote:
> > Do we have any statistics that tell us how many pages are sitting in the
> > lru pvecs? Although this helps the problem overall, don't we still have
> > a problem with memory being held in such an opaque place?
> Is it really worth bothering when we
On Wed 08-06-16 08:31:21, Dave Hansen wrote:
> On 06/08/2016 07:35 AM, Lukasz Odzioba wrote:
> > diff --git a/mm/swap.c b/mm/swap.c
> > index 9591614..3fe4f18 100644
> > --- a/mm/swap.c
> > +++ b/mm/swap.c
> > @@ -391,9 +391,8 @@ static void __lru_cache_add(struct page *page)
> > struct pagevec
On 06/08/2016 07:35 AM, Lukasz Odzioba wrote:
> diff --git a/mm/swap.c b/mm/swap.c
> index 9591614..3fe4f18 100644
> --- a/mm/swap.c
> +++ b/mm/swap.c
> @@ -391,9 +391,8 @@ static void __lru_cache_add(struct page *page)
> struct pagevec *pvec = &get_cpu_var(lru_add_pvec);
>
> get_page
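For readers without the full patch, the quoted hunk amounts to draining the
pagevec immediately whenever a compound page is added. Reconstructed from
the context above (so a sketch, not the verbatim patch), the result would
look roughly like:

static void __lru_cache_add(struct page *page)
{
        struct pagevec *pvec = &get_cpu_var(lru_add_pvec);

        get_page(page);
        /*
         * Drain when the pagevec fills up, as before, but also as soon as
         * a compound page lands here, so a THP never sits parked on a
         * per-cpu pagevec where nothing accounts for it.
         */
        if (!pagevec_add(pvec, page) || PageCompound(page))
                __pagevec_lru_add(pvec);
        put_cpu_var(lru_add_pvec);
}

The trade-off is one flush per THP instead of batching fourteen of them,
which is cheap next to the cost of faulting in 2 MiB in the first place.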
On Wed 08-06-16 16:35:37, Lukasz Odzioba wrote:
> When the application does not exit cleanly (e.g. SIGTERM) we might
I do not see how a SIGTERM would make any difference. But see below.
> end up with some pages in lru_add_pvec, which is ok. With THP
> enabled, huge pages may also end up on per cpu
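For what it's worth, nothing drains these pagevecs on an otherwise idle
system: lru_add_drain() only flushes the local CPU, and lru_add_drain_all()
is only invoked from paths such as page migration's migrate_prep(). If no
such operation happens to run, the pages stay parked indefinitely, which is
exactly the window the patch closes for THPs.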