Hi!
> >
> > And if there is a meaningful way to make the kernel behave better, that
> > would
> > obviously be of huge value too.
> >
> > Thanks
> > Daniel
>
> Hi,
>
> Is there a way for an application to be told that there is a memory
> pressure situation?
> For example, say I do a "make -j32"
On Wed 21-08-19 22:42:29, James Courtier-Dutton wrote:
> On Tue, 20 Aug 2019 at 07:47, Daniel Drake wrote:
> >
> > Hi,
> >
> > And if there is a meaningful way to make the kernel behave better, that
> > would
> > obviously be of huge value too.
> >
> > Thanks
> > Daniel
>
> Hi,
>
> Is there a way for an application to be told that there is a memory
> pressure situation?
On Fri, Aug 23, 2019 at 9:54 AM ndrw wrote:
> That's obviously a lot better than hard freezes but I wouldn't call such
> system lock-ups an excellent result. A PSI-triggered OOM killer would have
> indeed been very useful as an emergency brake, and IMHO such a mechanism
> should be built into the kernel
On 20/08/2019 07:46, Daniel Drake wrote:
To share our results so far, despite this daemon being a quick initial
implementation, we find that it is bringing excellent results: no more memory
pressure hangs. The system recovers in less than 30 seconds, usually more
like 10-15 seconds.
That's obviously a lot better than hard freezes but I wouldn't call such
system lock-ups an excellent result.
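[As an aside for readers following along: the daemon discussed above is built on the PSI metrics exposed under /proc/pressure. Below is a minimal sketch of that style of userspace watchdog; it samples the memory "full" avg10 value once per second and reacts once pressure has stayed above a threshold for several consecutive samples. The threshold, the window, and the (omitted) kill policy are illustrative assumptions, not a description of the actual daemon.]

/*
 * Illustrative sketch only: poll the "full avg10" figure from
 * /proc/pressure/memory once per second and report when it stays above
 * a threshold for several samples in a row. A real daemon would then
 * pick a victim (e.g. the largest-RSS process) and SIGKILL it.
 * Requires a PSI-enabled kernel (v4.20+, CONFIG_PSI).
 */
#include <stdio.h>
#include <string.h>
#include <unistd.h>

#define PRESSURE_LIMIT   80.0   /* percent; assumed threshold */
#define SUSTAIN_SECONDS  10     /* how long it must stay above */

static double read_full_avg10(void)
{
	char line[256];
	double avg10 = -1.0;
	FILE *f = fopen("/proc/pressure/memory", "r");

	if (!f)
		return -1.0;
	while (fgets(line, sizeof(line), f)) {
		/* only the "full ..." line is interesting here */
		if (!strncmp(line, "full ", 5) &&
		    sscanf(line, "full avg10=%lf", &avg10) == 1)
			break;
	}
	fclose(f);
	return avg10;
}

int main(void)
{
	int above = 0;

	for (;;) {
		double p = read_full_avg10();

		if (p < 0) {
			fprintf(stderr, "cannot read /proc/pressure/memory\n");
			return 1;
		}
		above = (p >= PRESSURE_LIMIT) ? above + 1 : 0;
		if (above >= SUSTAIN_SECONDS) {
			/* emergency-brake action would go here */
			printf("sustained memory pressure: full avg10=%.2f\n", p);
			above = 0;
		}
		sleep(1);
	}
}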
On Tue, 20 Aug 2019 at 07:47, Daniel Drake wrote:
>
> Hi,
>
> And if there is a meaningful way to make the kernel behave better, that would
> obviously be of huge value too.
>
> Thanks
> Daniel
Hi,
Is there a way for an application to be told that there is a memory
pressure situation?
For example, say I do a "make -j32"
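[There is, on kernels with PSI (CONFIG_PSI, v4.20+; trigger support since v5.2): an application can register a trigger on /proc/pressure/memory and be woken via poll() when stall time crosses a threshold within a time window. A minimal sketch, with arbitrary example thresholds (100ms of "full" stall within any 1-second window):]

/*
 * Sketch of the PSI trigger interface: ask to be notified when memory
 * "full" stall time exceeds 100ms within any 1s window. The values in
 * the trigger string are in microseconds.
 */
#include <fcntl.h>
#include <poll.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
	const char trig[] = "full 100000 1000000";
	struct pollfd fds = { .events = POLLPRI };

	fds.fd = open("/proc/pressure/memory", O_RDWR | O_NONBLOCK);
	if (fds.fd < 0) {
		perror("open /proc/pressure/memory");
		return 1;
	}
	if (write(fds.fd, trig, strlen(trig) + 1) < 0) {
		perror("write trigger");
		return 1;
	}

	for (;;) {
		if (poll(&fds, 1, -1) < 0) {
			perror("poll");
			return 1;
		}
		if (fds.revents & POLLERR) {
			fprintf(stderr, "event source went away\n");
			return 1;
		}
		if (fds.revents & POLLPRI)
			printf("memory pressure threshold crossed\n");
	}
}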
Hi,
Artem S. Tashkinov wrote:
> Once you hit a situation when opening a new tab requires more RAM than
> is currently available, the system will stall hard. You will barely be
> able to move the mouse pointer. Your disk LED will be flashing
> incessantly (I'm not entirely sure why). You will not
On 8/9/19 7:31 PM, Johannes Weiner wrote:
>> It made a difference, but not enough, it seems. Before the patch I could
>> observe "io:full avg10" around 75% and "memory:full avg10" around 20%;
>> after the patch, "memory:full avg10" went to around 45%, while io stayed
>> the same (BTW should the ref
On Sat 10-08-19 13:34:06, ndrw wrote:
> On 09/08/2019 11:50, Michal Hocko wrote:
> > We try to protect a low amount of cache. Have a look at the get_scan_count
> > function. But the exact amount of cache to be protected is really
> > hard to know without a crystal ball or an understanding of the workload.
On 09/08/2019 09:57, Michal Hocko wrote:
This is useful feedback! What was your workload? Which kernel version?
With 16GB zram swap and swappiness=60 I get the avg10 memory PSI numbers
of about 10 when swap is half filled and ~30 immediately before the
freeze. Swapping with zram has less ef
On 09/08/2019 11:50, Michal Hocko wrote:
We try to protect a low amount of cache. Have a look at the get_scan_count
function. But the exact amount of cache to be protected is really
hard to know without a crystal ball or an understanding of the workload.
The kernel has neither of the two.
T
On Fri, Aug 09, 2019 at 04:56:28PM +0200, Vlastimil Babka wrote:
> On 8/8/19 7:27 PM, Johannes Weiner wrote:
> > On Thu, Aug 08, 2019 at 04:47:18PM +0200, Vlastimil Babka wrote:
> >> On 8/7/19 10:51 PM, Johannes Weiner wrote:
> > >>> From 9efda85451062dea4ea287a886e515efefeb1545 Mon Sep 17 00:00:00 2001
On 8/8/19 7:27 PM, Johannes Weiner wrote:
> On Thu, Aug 08, 2019 at 04:47:18PM +0200, Vlastimil Babka wrote:
>> On 8/7/19 10:51 PM, Johannes Weiner wrote:
>>> From 9efda85451062dea4ea287a886e515efefeb1545 Mon Sep 17 00:00:00 2001
>>> From: Johannes Weiner
>>> Date: Mon, 5 Aug 2019 13:15:16 -0400
>
[...]
Hi,
This is an interesting topic for me, so I would like to join the conversation.
I will be glad to be of any help here, either in testing PSI or in
verifying some scenarios and observations.
I have some experience working with low-memory embedded devices, with
RAM as low as 128MB or 256MB
On Fri 09-08-19 11:09:33, ndrw wrote:
> On 09/08/2019 09:57, Michal Hocko wrote:
> > We already do have a reserve (min_free_kbytes). That gives kswapd some
> > room to perform reclaim in the background without obvious latencies to
> > allocating tasks (well, the CPU is still used, so there is still some effect).
On 09/08/2019 09:57, Michal Hocko wrote:
We already do have a reserve (min_free_kbytes). That gives kswapd some
room to perform reclaim in the background without obvious latencies to
allocating tasks (well, the CPU is still used, so there is still some effect).
I tried this option in the past. Unfort
On Thu 08-08-19 22:59:32, ndrw wrote:
> On 08/08/2019 19:59, Michal Hocko wrote:
> > Well, I am afraid that implementing anything like that in the kernel
> > will lead to many regressions and bug reports. People tend to have very
> > different opinions on when it is suitable to kill a potentially
>
On 08/08/2019 19:59, Michal Hocko wrote:
Well, I am afraid that implementing anything like that in the kernel
will lead to many regressions and bug reports. People tend to have very
different opinions on when it is suitable to kill a potentially
important part of a workload just because memory ge
On Thu 08-08-19 18:57:02, ndrw...@redhazel.co.uk wrote:
>
>
> On 8 August 2019 17:32:28 BST, Michal Hocko wrote:
> >
> >> Would it be possible to reserve a fixed (configurable) amount of RAM
> >for caches,
> >
> >I am afraid there is nothing like that available and I would even argue
> >it doesn't make much sense either.
On 8 August 2019 17:32:28 BST, Michal Hocko wrote:
>
>> Would it be possible to reserve a fixed (configurable) amount of RAM
>for caches,
>
>I am afraid there is nothing like that available and I would even argue
>it doesn't make much sense either. What would you consider to be a
>cache? A kern
On Thu, Aug 08, 2019 at 04:47:18PM +0200, Vlastimil Babka wrote:
> On 8/7/19 10:51 PM, Johannes Weiner wrote:
> > From 9efda85451062dea4ea287a886e515efefeb1545 Mon Sep 17 00:00:00 2001
> > From: Johannes Weiner
> > Date: Mon, 5 Aug 2019 13:15:16 -0400
> > Subject: [PATCH] psi: trigger the OOM killer on severe thrashing
On Thu 08-08-19 16:10:07, ndrw...@redhazel.co.uk wrote:
>
>
> On 8 August 2019 12:48:26 BST, Michal Hocko wrote:
> >>
> >> Per default, the OOM killer will engage after 15 seconds of at least
> >> 80% memory pressure. These values are tunable via sysctls
> >> vm.thrashing_oom_period and vm.thrashing_oom_level.
On 8 August 2019 12:48:26 BST, Michal Hocko wrote:
>>
>> Per default, the OOM killer will engage after 15 seconds of at least
>> 80% memory pressure. These values are tunable via sysctls
>> vm.thrashing_oom_period and vm.thrashing_oom_level.
>
>As I've said earlier I would be somehow more comf
On 8/7/19 10:51 PM, Johannes Weiner wrote:
> From 9efda85451062dea4ea287a886e515efefeb1545 Mon Sep 17 00:00:00 2001
> From: Johannes Weiner
> Date: Mon, 5 Aug 2019 13:15:16 -0400
> Subject: [PATCH] psi: trigger the OOM killer on severe thrashing
Thanks a lot, perhaps finally we are going to eat t
On Wed 07-08-19 16:51:38, Johannes Weiner wrote:
[...]
> >From 9efda85451062dea4ea287a886e515efefeb1545 Mon Sep 17 00:00:00 2001
> From: Johannes Weiner
> Date: Mon, 5 Aug 2019 13:15:16 -0400
> Subject: [PATCH] psi: trigger the OOM killer on severe thrashing
>
> Over the last few years we have ha
On Wed, Aug 07, 2019 at 02:01:30PM -0700, Andrew Morton wrote:
> On Wed, 7 Aug 2019 16:51:38 -0400 Johannes Weiner wrote:
>
> > However, eb414681d5a0 ("psi: pressure stall information for CPU,
> > memory, and IO") introduced a memory pressure metric that quantifies
> > the share of wallclock time
On Wed, Aug 07, 2019 at 04:51:42PM -0400, Johannes Weiner wrote:
> Per default, the OOM killer will engage after 15 seconds of at least
> 80% memory pressure. These values are tunable via sysctls
> vm.thrashing_oom_period and vm.thrashing_oom_level.
Let's go with this:
Per default, the OOM killer
On Wed, 7 Aug 2019 16:51:38 -0400 Johannes Weiner wrote:
> However, eb414681d5a0 ("psi: pressure stall information for CPU,
> memory, and IO") introduced a memory pressure metric that quantifies
> the share of wallclock time in which userspace waits on reclaim,
> refaults, swapins. By using absol
On Wed, Aug 07, 2019 at 09:59:27AM +0200, Michal Hocko wrote:
> On Tue 06-08-19 18:01:50, Johannes Weiner wrote:
> > On Tue, Aug 06, 2019 at 09:27:05AM -0700, Suren Baghdasaryan wrote:
> [...]
> > > > > I'm not sure 10s is the perfect value here, but I do think the kernel
> > > > > should try to ge
On Tue 06-08-19 18:01:50, Johannes Weiner wrote:
> On Tue, Aug 06, 2019 at 09:27:05AM -0700, Suren Baghdasaryan wrote:
[...]
> > > > I'm not sure 10s is the perfect value here, but I do think the kernel
> > > > should try to get out of such a state, where interacting with the
> > > > system is impo
On Tue, Aug 06, 2019 at 09:27:05AM -0700, Suren Baghdasaryan wrote:
> On Tue, Aug 6, 2019 at 7:36 AM Michal Hocko wrote:
> >
> > On Tue 06-08-19 10:27:28, Johannes Weiner wrote:
> > > On Tue, Aug 06, 2019 at 11:36:48AM +0200, Vlastimil Babka wrote:
> > > > On 8/6/19 3:08 AM, Suren Baghdasaryan wrote:
On Tue, 6 Aug 2019 at 02:09, Suren Baghdasaryan wrote:
>
> 80% of the last 10 seconds spent in full stall would definitely be a
> problem. If the system was already low on memory (which it probably
> is, or we would not be reclaiming so hard and registering such a big
> stall) then oom-killer woul
Sorry, I don't have the original message to reply to. But to those
interested, I have found a solution to the kernel's complete inability
to allocate more memory when it needs to swap out.
Increase /proc/sys/vm/watermark_scale_factor from the default 10 to 500.
It will make a huge difference,
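[A rough sense of why this helps, as a gloss on the advice above rather than part of the original mail: vm.watermark_scale_factor is expressed in fractions of 10,000 of memory, so the default of 10 leaves only about 0.1% of memory between the watermarks that wake kswapd and let it go back to sleep, while 500 scales that to roughly 5%. On the mem=4G machine from the original report that is about 0.05 x 4096 MiB, roughly 205 MiB of headroom, so background reclaim starts well before the allocator is squeezed dry. It can be applied at runtime with "sysctl -w vm.watermark_scale_factor=500" or by writing to the proc file above, the trade-off being more background reclaim activity from kswapd.]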
* Artem S. Tashkinov:
> There's this bug which has been bugging many people for many years
> already and which is reproducible in less than a few minutes under the
> latest and greatest kernel, 5.2.6. All the kernel parameters are set to
> defaults.
>
> Steps to reproduce:
>
> 1) Boot with mem=4G
On Tue, Aug 6, 2019 at 7:36 AM Michal Hocko wrote:
>
> On Tue 06-08-19 10:27:28, Johannes Weiner wrote:
> > On Tue, Aug 06, 2019 at 11:36:48AM +0200, Vlastimil Babka wrote:
> > > On 8/6/19 3:08 AM, Suren Baghdasaryan wrote:
> > > >> @@ -1280,3 +1285,50 @@ static int __init psi_proc_init(void)
> >
On Tue 06-08-19 10:27:28, Johannes Weiner wrote:
> On Tue, Aug 06, 2019 at 11:36:48AM +0200, Vlastimil Babka wrote:
> > On 8/6/19 3:08 AM, Suren Baghdasaryan wrote:
> > >> @@ -1280,3 +1285,50 @@ static int __init psi_proc_init(void)
> > >> return 0;
> > >> }
> > >> module_init(psi_proc_init);
On Tue, Aug 06, 2019 at 11:36:48AM +0200, Vlastimil Babka wrote:
> On 8/6/19 3:08 AM, Suren Baghdasaryan wrote:
> >> @@ -1280,3 +1285,50 @@ static int __init psi_proc_init(void)
> >> return 0;
> >> }
> >> module_init(psi_proc_init);
> >> +
> >> +#define OOM_PRESSURE_LEVEL 80
> >> +#define OOM_PRESSURE_PERIOD (10 * NSEC_PER_SEC)
On 8/6/19 3:08 AM, Suren Baghdasaryan wrote:
>> @@ -1280,3 +1285,50 @@ static int __init psi_proc_init(void)
>> return 0;
>> }
>> module_init(psi_proc_init);
>> +
>> +#define OOM_PRESSURE_LEVEL 80
>> +#define OOM_PRESSURE_PERIOD (10 * NSEC_PER_SEC)
>
> 80% of the last 10 seconds spent in full stall would definitely be a problem.
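[To make the quoted 80%-over-10-seconds condition concrete, here is a hedged userspace illustration, emphatically not the kernel patch being discussed: the total= field on the "full" line of /proc/pressure/memory is cumulative stall time in microseconds, so sampling it twice and dividing the delta by the elapsed wallclock time gives the share of the window spent in full stall.]

/*
 * Userspace illustration of "full memory pressure >= 80% sustained over
 * a 10 second window", using the cumulative total= counter (microseconds).
 */
#include <inttypes.h>
#include <stdio.h>
#include <unistd.h>

#define WINDOW_SEC 10
#define LEVEL_PCT  80.0

static uint64_t full_total_us(void)
{
	uint64_t t = 0;
	char line[256];
	FILE *f = fopen("/proc/pressure/memory", "r");

	if (!f)
		return 0;
	while (fgets(line, sizeof(line), f))
		if (sscanf(line, "full avg10=%*f avg60=%*f avg300=%*f total=%" SCNu64, &t) == 1)
			break;
	fclose(f);
	return t;
}

int main(void)
{
	for (;;) {
		uint64_t before = full_total_us();

		sleep(WINDOW_SEC);
		uint64_t after = full_total_us();
		/* share of the window in which all non-idle tasks stalled */
		double share = 100.0 * (after - before) / (WINDOW_SEC * 1000000.0);

		if (share >= LEVEL_PCT)
			printf("thrashing: %.1f%% full stall over %ds\n",
			       share, WINDOW_SEC);
	}
}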
On Mon 05-08-19 14:55:42, Johannes Weiner wrote:
> On Mon, Aug 05, 2019 at 03:31:19PM +0200, Michal Hocko wrote:
> > On Mon 05-08-19 14:13:16, Vlastimil Babka wrote:
> > > On 8/4/19 11:23 AM, Artem S. Tashkinov wrote:
> > > > Hello,
> > > >
> > > > There's this bug which has been bugging many people for many years
> On Mon, Aug 5, 2019 at 12:31 PM Johannes Weiner wrote:
>>
>> On Mon, Aug 05, 2019 at 02:13:16PM +0200, Vlastimil Babka wrote:
>> > On 8/4/19 11:23 AM, Artem S. Tashkinov wrote:
>> > > Hello,
>> > >
>> > > There's this bug which has been bugging many people for many years
>> > > already and which
On Mon, Aug 5, 2019 at 12:31 PM Johannes Weiner wrote:
>
> On Mon, Aug 05, 2019 at 02:13:16PM +0200, Vlastimil Babka wrote:
> > On 8/4/19 11:23 AM, Artem S. Tashkinov wrote:
> > > Hello,
> > >
> > > There's this bug which has been bugging many people for many years
> > > already and which is reproducible in less than a few minutes
On Mon, Aug 05, 2019 at 02:13:16PM +0200, Vlastimil Babka wrote:
> On 8/4/19 11:23 AM, Artem S. Tashkinov wrote:
> > Hello,
> >
> > There's this bug which has been bugging many people for many years
> > already and which is reproducible in less than a few minutes under the
> > latest and greatest
On Mon, Aug 05, 2019 at 03:31:19PM +0200, Michal Hocko wrote:
> On Mon 05-08-19 14:13:16, Vlastimil Babka wrote:
> > On 8/4/19 11:23 AM, Artem S. Tashkinov wrote:
> > > Hello,
> > >
> > > There's this bug which has been bugging many people for many years
> > > already and which is reproducible in
On Mon, Aug 5, 2019 at 6:31 AM Michal Hocko wrote:
>
> On Mon 05-08-19 14:13:16, Vlastimil Babka wrote:
> > On 8/4/19 11:23 AM, Artem S. Tashkinov wrote:
> > > Hello,
> > >
> > > There's this bug which has been bugging many people for many years
> > > already and which is reproducible in less than
On Mon 05-08-19 14:13:16, Vlastimil Babka wrote:
> On 8/4/19 11:23 AM, Artem S. Tashkinov wrote:
> > Hello,
> >
> > There's this bug which has been bugging many people for many years
> > already and which is reproducible in less than a few minutes under the
> > latest and greatest kernel, 5.2.6. All the kernel parameters are set to defaults.
On 8/4/19 11:23 AM, Artem S. Tashkinov wrote:
> Hello,
>
> There's this bug which has been bugging many people for many years
> already and which is reproducible in less than a few minutes under the
> latest and greatest kernel, 5.2.6. All the kernel parameters are set to
> defaults.
>
> Steps to reproduce:
On 8/5/19 9:05 AM, Hillf Danton wrote:
On Sun, 4 Aug 2019 09:23:17 + "Artem S. Tashkinov" wrote:
Hello,
There's this bug which has been bugging many people for many years
already and which is reproducible in less than a few minutes under the
latest and greatest kernel, 5.2.6. All the kernel parameters are set to defaults.
Hello,
There's this bug which has been bugging many people for many years
already and which is reproducible in less than a few minutes under the
latest and greatest kernel, 5.2.6. All the kernel parameters are set to
defaults.
Steps to reproduce:
1) Boot with mem=4G
2) Disable swap to make ever