On Tue 07-06-16 13:20:00, Michal Hocko wrote:
> I guess you want something like posix_memalign or start faulting in from
> an aligned address to guarantee you will fault 2MB pages.
Good catch.
> Besides that I am really suspicious that this will be measurable at all.
> I would just go and spin
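(Not from the thread -- a minimal sketch of the aligned-faulting approach suggested
above; the buffer size and the explicit madvise(MADV_HUGEPAGE) hint are illustrative
assumptions, not code that was posted.)

//compile with: gcc thp_align.c -o thp_align
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>

#define ALIGN_2MB (2UL << 20)

int main(void)
{
	void *buf;
	size_t len = 64 * ALIGN_2MB;		/* arbitrary example size */

	/* start from a 2MB-aligned address so the fault path can use THP */
	if (posix_memalign(&buf, ALIGN_2MB, len))
		return 1;
	madvise(buf, len, MADV_HUGEPAGE);	/* ask for THP explicitly */
	memset(buf, 0, len);			/* fault the whole range in */
	return 0;
}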
On Tue 07-06-16 09:02:02, Odzioba, Lukasz wrote:
[...]
> //compile with: gcc bench.c -o bench_2M -fopenmp
> //compile with: gcc -D SMALL_PAGES bench.c -o bench_4K -fopenmp
> #include
> #include
> #include
>
> #define MAP_HUGE_SHIFT 26
> #define MAP_HUGE_2MB (21 << MAP_HUGE_SHIFT)
>
>
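(The benchmark body is cut off above; below is only a guess at how the MAP_HUGE_*
macros are typically combined with mmap(MAP_HUGETLB) -- the mapping size and access
pattern are assumptions, not the original benchmark.)

#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

#ifndef MAP_HUGE_SHIFT
#define MAP_HUGE_SHIFT 26
#endif
#ifndef MAP_HUGE_2MB
#define MAP_HUGE_2MB (21 << MAP_HUGE_SHIFT)
#endif

int main(void)
{
	size_t len = 1UL << 30;	/* 1GB, illustrative */
#ifdef SMALL_PAGES
	int flags = MAP_PRIVATE | MAP_ANONYMOUS;
#else
	int flags = MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB | MAP_HUGE_2MB;
#endif
	char *p = mmap(NULL, len, PROT_READ | PROT_WRITE, flags, -1, 0);

	if (p == MAP_FAILED) {
		perror("mmap");
		return 1;
	}
	memset(p, 1, len);	/* touch every page */
	munmap(p, len);
	return 0;
}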
On Wed 05-11-16 09:53:00, Michal Hocko wrote:
> Yes I think this makes sense. The only case where it would be suboptimal
> is when the pagevec was already full and then we just created a single
> page pvec to drain it. This can be handled better though by:
>
> diff --git a/mm/swap.c b/mm/swap.c
>
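(The diff itself is cut off above.  Roughly, the idea is to let a compound page go
into the existing per-cpu pagevec and flush right away, instead of draining first
and then building a single-page pvec for it.  A sketch against the mm/swap.c of
that era, not necessarily the posted patch:)

static void __lru_cache_add(struct page *page)
{
	struct pagevec *pvec = &get_cpu_var(lru_add_pvec);

	get_page(page);
	/* flush when the pvec fills up _or_ when we just added a THP */
	if (!pagevec_add(pvec, page) || PageCompound(page))
		__pagevec_lru_add(pvec);
	put_cpu_var(lru_add_pvec);
}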
On 05/11/2016 09:53 AM, Michal Hocko wrote:
On Fri 06-05-16 09:04:34, Dave Hansen wrote:
On 05/06/2016 08:10 AM, Odzioba, Lukasz wrote:
On Thu 05-05-16 09:21:00, Michal Hocko wrote:
Or maybe the async nature of flushing turns
out to be just impractical and unreliable and we will end up
On Fri 06-05-16 09:04:34, Dave Hansen wrote:
> On 05/06/2016 08:10 AM, Odzioba, Lukasz wrote:
> > On Thu 05-05-16 09:21:00, Michal Hocko wrote:
> >> Or maybe the async nature of flushing turns
> >> out to be just impractical and unreliable and we will end up skipping
> >> THP (or all compound
On Thu 05-05-16 17:25:07, Odzioba, Lukasz wrote:
> On Thu 05-05-16 09:21:00, Michal Hocko wrote:
> > OK, it wasn't that tricky after all. Maybe I have missed something but
> > the following should work. Or maybe the async nature of flushing turns
> > out to be just impractical and unreliable and
On 05/06/2016 08:10 AM, Odzioba, Lukasz wrote:
> On Thu 05-05-16 09:21:00, Michal Hocko wrote:
>> Or maybe the async nature of flushing turns
>> out to be just impractical and unreliable and we will end up skipping
>> THP (or all compound pages) for pcp LRU add cache. Let's see...
>
> What if we
On Thu 05-05-16 09:21:00, Michal Hocko wrote:
> Or maybe the async nature of flushing turns
> out to be just impractical and unreliable and we will end up skipping
> THP (or all compound pages) for pcp LRU add cache. Let's see...
What if we simply skip lru_add pvecs for compound pages?
That way
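(A sketch of what "skip the pvec" could look like -- a one-off pagevec that is
flushed immediately for compound pages; this is an illustration of the proposal,
not the patch that was actually posted:)

void lru_cache_add(struct page *page)
{
	if (PageCompound(page)) {
		/* bypass the per-cpu cache: put the huge page on the LRU now */
		struct pagevec pvec;

		pagevec_init(&pvec, 0);
		get_page(page);
		pagevec_add(&pvec, page);
		__pagevec_lru_add(&pvec);
		return;
	}
	__lru_cache_add(page);
}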
On Thu 05-05-16 09:21:00, Michal Hocko wrote:
> OK, it wasn't that tricky after all. Maybe I have missed something but
> the following should work. Or maybe the async nature of flushing turns
> out to be just impractical and unreliable and we will end up skipping
> THP (or all compound pages) for
On Wed 04-05-16 22:36:43, Michal Hocko wrote:
> On Wed 04-05-16 19:41:59, Odzioba, Lukasz wrote:
[...]
> > I have an app which allocates almost all of the memory from a numa node and
> > with just the second patch applied, 30-50% of 100 consecutive executions got killed.
>
> This is still not acceptable. So I
On Wed 04-05-16 19:41:59, Odzioba, Lukasz wrote:
> On Thu 02-05-16 03:00:00, Michal Hocko wrote:
> > So I have given this a try (not tested yet) and it doesn't look terribly
> > complicated. It is hijacking vmstat for a purpose it wasn't intended for
> > originally but creating a dedicated kernel
On 05/04/2016 12:41 PM, Odzioba, Lukasz wrote:
> Do you see any advantages of dropping THP from pagevecs over this solution?
It's a more foolproof solution. Even with this patch, there might still
be some corner cases where the draining doesn't occur. That "two
minutes" might be come 20 or 200
On Thu 02-05-16 03:00:00, Michal Hocko wrote:
> So I have given this a try (not tested yet) and it doesn't look terribly
> complicated. It is hijacking vmstat for a purpose it wasn't intended for
> originally but creating a dedicated kernel threads/WQ sounds like an
> overkill to me. Does this
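(A sketch of the vmstat-hook idea -- have the periodic vmstat worker also drain a
CPU's LRU-add pagevec when it is non-empty.  The helper names below are made up for
illustration and this is not the patch that was posted:)

/* in mm/swap.c: a hypothetical helper the vmstat worker could call */
bool lru_add_drain_needed_on(int cpu)
{
	/* lru_add_pvec is the per-cpu LRU-add pagevec in mm/swap.c */
	return pagevec_count(&per_cpu(lru_add_pvec, cpu)) != 0;
}

/* hypothetical hook from the periodic vmstat_update() work on each CPU */
static void vmstat_lru_drain(void)
{
	if (lru_add_drain_needed_on(smp_processor_id()))
		lru_add_drain();	/* drains this CPU's pvecs */
}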
On Tue, May 03, 2016 at 09:37:57AM +0200, Michal Hocko wrote:
> On Mon 02-05-16 19:02:50, Kirill A. Shutemov wrote:
> > On Mon, May 02, 2016 at 08:49:03AM -0700, Dave Hansen wrote:
> > > On 05/02/2016 08:01 AM, Kirill A. Shutemov wrote:
> > > > On Mon, May 02, 2016 at 04:39:35PM +0200, Vlastimil
On Mon 02-05-16 19:02:50, Kirill A. Shutemov wrote:
> On Mon, May 02, 2016 at 08:49:03AM -0700, Dave Hansen wrote:
> > On 05/02/2016 08:01 AM, Kirill A. Shutemov wrote:
> > > On Mon, May 02, 2016 at 04:39:35PM +0200, Vlastimil Babka wrote:
> > >> On 04/27/2016 07:11 PM, Dave Hansen wrote:
> > >>>
On Mon, May 02, 2016 at 08:49:03AM -0700, Dave Hansen wrote:
> On 05/02/2016 08:01 AM, Kirill A. Shutemov wrote:
> > On Mon, May 02, 2016 at 04:39:35PM +0200, Vlastimil Babka wrote:
> >> On 04/27/2016 07:11 PM, Dave Hansen wrote:
> >>> 6. Perhaps don't use the LRU pagevecs for large pages. It
On 05/02/2016 08:01 AM, Kirill A. Shutemov wrote:
> On Mon, May 02, 2016 at 04:39:35PM +0200, Vlastimil Babka wrote:
>> On 04/27/2016 07:11 PM, Dave Hansen wrote:
>>> 6. Perhaps don't use the LRU pagevecs for large pages. It limits the
>>>severity of the problem.
>>
>> I think that makes
On 05/02/2016 05:01 PM, Kirill A. Shutemov wrote:
On Mon, May 02, 2016 at 04:39:35PM +0200, Vlastimil Babka wrote:
On 04/27/2016 07:11 PM, Dave Hansen wrote:
6. Perhaps don't use the LRU pagevecs for large pages. It limits the
severity of the problem.
I think that makes sense. Being
On Mon, May 02, 2016 at 04:39:35PM +0200, Vlastimil Babka wrote:
> On 04/27/2016 07:11 PM, Dave Hansen wrote:
> >6. Perhaps don't use the LRU pagevecs for large pages. It limits the
> >severity of the problem.
>
> I think that makes sense. Being large already amortizes the cost per base
>
On 04/27/2016 07:11 PM, Dave Hansen wrote:
6. Perhaps don't use the LRU pagevecs for large pages. It limits the
severity of the problem.
I think that makes sense. Being large already amortizes the cost per
base page much more than pagevecs do (512 vs ~22 pages?).
On Thu 28-04-16 16:37:10, Michal Hocko wrote:
[...]
> 7. Hook into vmstat and flush from there? This would drain them
> periodically but it would also introduce a nondeterministic interference
> as well.
So I have given this a try (not tested yet) and it doesn't look terribly
complicated. It is
On Wed 27-04-16 10:11:04, Dave Hansen wrote:
> On 04/27/2016 10:01 AM, Odzioba, Lukasz wrote:
[...]
> > 1. We need some statistics on the number and total *SIZES* of all pages
> >in the lru pagevecs. It's too opaque now.
> > 2. We need to make darn sure we drain the lru pagevecs before
On 04/27/2016 10:01 AM, Odzioba, Lukasz wrote:
> Pieces of the puzzle:
> A) after process termination memory is not getting freed nor accounted as free
I don't think this part is necessarily a bug. As long as we have stats
*somewhere*, and we really do "reclaim" them, I don't think we need to
Hi,
I've encountered a problem which I'd like to discuss here (tested on 3.10 and 4.5).
While running some workloads we noticed that in case of "improper" application
exit (like SIGTERM) quite a bit (a few GBs) of memory is not being reclaimed
after process termination.
Executing echo 1 >