On Mon, Oct 22, 2018 at 1:01 PM Michal Hocko wrote:
>
> On Mon 22-10-18 15:08:22, Roman Gushchin wrote:
> [...]
> > RE backporting: I'm slightly surprised that only one patch of the memcg
> > reclaim fix series has been backported. Either all or none makes much more
> > sense to me.
>
> Yeah, I th
> On Sat 20-10-18 14:41:40, Spock wrote:
> > Hello,
> >
> > I have a workload, which creates lots of cache pages. Before 4.18.15,
> > the behavior was very stable: pagecache is constantly growing until it
> > consumes all the free memory, and then kswapd is balancing it around
> > low watermark. A
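The "low watermark" in that report is the per-zone free-page threshold that kswapd tries to keep each zone above. A small userspace sketch for watching those watermarks (field names as exposed by current /proc/zoneinfo; this is an illustration, not code from the report):

#include <stdio.h>

/* Print free pages against the min/low/high watermarks kswapd balances. */
int main(void)
{
        FILE *f = fopen("/proc/zoneinfo", "r");
        char line[256], zone[64];
        unsigned long val;

        if (!f) {
                perror("/proc/zoneinfo");
                return 1;
        }
        while (fgets(line, sizeof(line), f)) {
                if (sscanf(line, "Node %*d, zone %63s", zone) == 1)
                        printf("zone %s\n", zone);
                else if (sscanf(line, " pages free %lu", &val) == 1)
                        printf("  free %lu\n", val);
                else if (sscanf(line, " min %lu", &val) == 1)
                        printf("  min  %lu\n", val);
                else if (sscanf(line, " low %lu", &val) == 1)
                        printf("  low  %lu\n", val);
                else if (sscanf(line, " high %lu", &val) == 1)
                        printf("  high %lu\n", val);
        }
        fclose(f);
        return 0;
}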
On Mon, Oct 22, 2018 at 10:33:22AM +0200, Michal Hocko wrote:
> Cc some more people.
>
> I am wondering why 172b06c32b94 ("mm: slowly shrink slabs with a
> relatively small number of objects") has been backported to the stable
> tree when not marked that way. Put that aside it seems likely that the
On Mon 22-10-18 15:08:22, Roman Gushchin wrote:
[...]
> RE backporting: I'm slightly surprised that only one patch of the memcg
> reclaim fix series has been backported. Either all or none makes much more
> sense to me.
Yeah, I think this is AUTOSEL trying to be clever again. I thought it had
been
On Mon, Oct 22, 2018 at 10:33:22AM +0200, Michal Hocko wrote:
> Cc some more people.
>
> I am wondering why 172b06c32b94 ("mm: slowly shrink slabs with a
> relatively small number of objects") has been backported to the stable
> tree when not marked that way. Put that aside it seems likely that the
Cc some more people.
I am wondering why 172b06c32b94 ("mm: slowly shrink slabs with a
relatively small number of objects") has been backported to the stable
tree when not marked that way. Put that aside it seems likely that the
upstream kernel will have the same issue I suspect. Roman, could you
ha
On Sat, Oct 20, 2018 at 08:37:28AM -0700, Randy Dunlap wrote:
> [add linux-mm mailing list + people]
>
>
> On 10/20/18 4:41 AM, Spock wrote:
> > Hello,
> >
> > I have a workload, which creates lots of cache pages. Before 4.18.15,
> > the behavior was very stable: pagecache is constantly growing
[add linux-mm mailing list + people]
On 10/20/18 4:41 AM, Spock wrote:
> Hello,
>
> I have a workload, which creates lots of cache pages. Before 4.18.15,
> the behavior was very stable: pagecache is constantly growing until it
> consumes all the free memory, and then kswapd is balancing it aroun
Neil Horman wrote:
On Sun, Jul 24, 2005 at 10:40:19PM -0300, Márcio Oliveira wrote:
Neil Horman wrote:
On Sat, Jul 23, 2005 at 08:16:20PM -0300, Márcio Oliveira wrote:
Neil,
The best way I can think to do that is take a look at /proc/slabinfo.
That will
like
On Sun, Jul 24, 2005 at 10:40:19PM -0300, Márcio Oliveira wrote:
> Neil Horman wrote:
>
> >On Sat, Jul 23, 2005 at 08:16:20PM -0300, Márcio Oliveira wrote:
> >
> >
> >>Neil,
> >>
> >>
> >>
> >>>The best way I can think to do that is take a look at /proc/slabinfo.
> >>>That will
> >>>likely g
Hello,
At Sun, 24 Jul 2005 22:40:19 -0300,
Márcio Oliveira wrote:
> I'm using the kernel-source package and trying to compile the
> source (in /usr/src/linux-2.4 directory) with "make config", "make dep",
> "make clean", "make bzImage", "make modules" and "make
> modules_install". I als
Neil Horman wrote:
On Sat, Jul 23, 2005 at 08:16:20PM -0300, Márcio Oliveira wrote:
Neil,
The best way I can think to do that is take a look at /proc/slabinfo.
That will
likely give you a pointer to which area of code is eating up your memory.
OK. I will monitor the /proc/
On Sat, Jul 23, 2005 at 08:16:20PM -0300, Márcio Oliveira wrote:
> Neil,
>
> >The best way I can think to do that is take a look at /proc/slabinfo.
> >That will
> >likely give you a pointer to which area of code is eating up your memory.
> >
> >
> OK. I will monitor the /proc/slabinfo file.
>
Neil,
The best way I can think to do that is take a look at /proc/slabinfo. That will
likely give you a pointer to which area of code is eating up your memory.
OK. I will monitor the /proc/slabinfo file.
Based on the sysrq-m info you posted it looks like due to fragmentation the
largest c
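Acting on the /proc/slabinfo suggestion above is straightforward from userspace. A sketch that ranks slab caches by approximate footprint, assuming the slabinfo 2.x column order (name, active_objs, num_objs, objsize); reading the file may need root, and the code is illustrative rather than something from the thread:

#include <stdio.h>

int main(void)
{
        FILE *f = fopen("/proc/slabinfo", "r");
        char line[512], name[64];
        unsigned long active, total, objsize;

        if (!f) {
                perror("/proc/slabinfo");
                return 1;
        }
        /* skip the version line and the column header */
        if (!fgets(line, sizeof(line), f) || !fgets(line, sizeof(line), f)) {
                fclose(f);
                return 1;
        }
        while (fgets(line, sizeof(line), f)) {
                if (sscanf(line, "%63s %lu %lu %lu",
                           name, &active, &total, &objsize) != 4)
                        continue;
                /* rough footprint, ignoring per-slab overhead */
                if (total * objsize >= 1024 * 1024)
                        printf("%-30s %8lu KiB (%lu objs x %lu bytes)\n",
                               name, total * objsize / 1024, total, objsize);
        }
        fclose(f);
        return 0;
}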
On Fri, Jul 22, 2005 at 08:23:19PM -0300, Márcio Oliveira wrote:
> >
> Roger, thanks for the information.
>
> I'm using Update 4 kernels (2.4.21-27.ELsmp - this kernel has some
> mm / oom fixes) and don't have big problems when creating large files,
> plus the server is a 32-bit machine.
>
>
July 22, 2005 2:42 PM
To: Neil Horman
Cc: [EMAIL PROTECTED]; linux-kernel@vger.kernel.org
Subject: Re: Memory Management
Neil Horman wrote:
On Fri, Jul 22, 2005 at 11:32:52AM -0300, Márcio Oliveira wrote:
Neil Horman wrote:
On Thu, Jul 21, 2005 at 10:40:54AM -0300, M
July 22, 2005 2:42 PM
> To: Neil Horman
> Cc: [EMAIL PROTECTED]; linux-kernel@vger.kernel.org
> Subject: Re: Memory Management
>
> Neil Horman wrote:
>
> >On Fri, Jul 22, 2005 at 11:32:52AM -0300, Márcio Oliveira wrote:
> >
> >
> >>Neil Horman wrote:
Neil Horman wrote:
On Fri, Jul 22, 2005 at 11:32:52AM -0300, Márcio Oliveira wrote:
Neil Horman wrote:
On Thu, Jul 21, 2005 at 10:40:54AM -0300, Márcio Oliveira wrote:
http://people.redhat.com/nhorman/papers/rhel3_vm.pdf
I wrote this with norm awhile back. It may help you o
On Fri, Jul 22, 2005 at 11:32:52AM -0300, Márcio Oliveira wrote:
> Neil Horman wrote:
>
> >On Thu, Jul 21, 2005 at 10:40:54AM -0300, Márcio Oliveira wrote:
> >
> >
> >>>http://people.redhat.com/nhorman/papers/rhel3_vm.pdf
> >>>I wrote this with norm awhile back. It may help you out.
> >>>Regards
Neil Horman wrote:
On Thu, Jul 21, 2005 at 10:40:54AM -0300, Márcio Oliveira wrote:
http://people.redhat.com/nhorman/papers/rhel3_vm.pdf
I wrote this with norm awhile back. It may help you out.
Regards
Neil
Neil,
Thanks. ~10-12GB of total RAM (16GB) are
How can Proc virtual mem
On Thu, Jul 21, 2005 at 10:40:54AM -0300, Márcio Oliveira wrote:
>
> >http://people.redhat.com/nhorman/papers/rhel3_vm.pdf
> >I wrote this with norm awhile back. It may help you out.
> >Regards
> >Neil
> >
> >
> Neil,
>
> Thanks. ~10-12GB of total RAM (16GB) are
>
> How can Proc virtual mem
http://people.redhat.com/nhorman/papers/rhel3_vm.pdf
I wrote this with norm awhile back. It may help you out.
Regards
Neil
Neil,
Thanks. ~10-12GB of total RAM (16GB) are
How can Proc virtual memory parameters like inactive_clean_percent,
overcommit_memory, overcommit_ratio and page_ca
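For the question about those /proc VM parameters: they live under /proc/sys/vm and are read and set through plain file I/O (or sysctl). A small sketch below reads a couple of them; note that inactive_clean_percent is a 2.4-era knob and may not exist on a given kernel, which is why missing files are reported rather than assumed:

#include <stdio.h>

/* Read one /proc/sys/vm tunable; writing the same path as root sets it,
 * e.g. echo 50 > /proc/sys/vm/overcommit_ratio. */
static int read_vm_tunable(const char *name, long *val)
{
        char path[128];
        FILE *f;
        int ok;

        snprintf(path, sizeof(path), "/proc/sys/vm/%s", name);
        f = fopen(path, "r");
        if (!f)
                return -1;
        ok = (fscanf(f, "%ld", val) == 1) ? 0 : -1;
        fclose(f);
        return ok;
}

int main(void)
{
        const char *knobs[] = { "overcommit_memory", "overcommit_ratio" };
        long v;
        int i;

        for (i = 0; i < 2; i++) {
                if (read_vm_tunable(knobs[i], &v) == 0)
                        printf("vm.%s = %ld\n", knobs[i], v);
                else
                        printf("vm.%s: not present on this kernel\n", knobs[i]);
        }
        return 0;
}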
On Thu, Jul 21, 2005 at 09:34:14AM -0300, Márcio Oliveira wrote:
> Arjan van de Ven wrote:
>
> >On Wed, 2005-07-20 at 11:23 -0300, Márcio Oliveira wrote:
> >
> >
> >>Arjan van de Ven wrote:
> >>
> >>
> >>
> >>>I'm sure RH support will be able to help you with that; I doubt many
> >>>other peop
Arjan van de Ven wrote:
On Wed, 2005-07-20 at 11:23 -0300, Márcio Oliveira wrote:
Arjan van de Ven wrote:
I'm sure RH support will be able to help you with that; I doubt many
other people care about an ancient kernel like that, and a vendor one to
boot.
(Also I assume you are using t
On Wed, 2005-07-20 at 11:23 -0300, Márcio Oliveira wrote:
> Arjan van de Ven wrote:
>
> >I'm sure RH support will be able to help you with that; I doubt many
> >other people care about an ancient kernel like that, and a vendor one to
> >boot.
> >
> >(Also I assume you are using the -hugemem kernel
Arjan van de Ven wrote:
I'm sure RH support will be able to help you with that; I doubt many
other people care about an ancient kernel like that, and a vendor one to
boot.
(Also I assume you are using the -hugemem kernel as the documentation
recommends you to do)
Arjan,
I'd like to know
On Tuesday 22 February 2005 04:57 am, Martin MOKREJŠ wrote:
> The 3GB labeled file corresponds to fast case, 4GB is ugly slow.
> What can you gather from those files?
I did take a look and didn't analyze it further since Andi mentioned it is a
known BIOS bug.
Sorry about the trouble - didn't imagi
Ingo Molnar wrote:
* Andi Kleen <[EMAIL PROTECTED]> wrote:
Although I've not re-tested this today again, it used to help a bit to specify
mem=3548M to decrease memory used by linux (tested with AGP card plugged in, when
bios reported 3556MB RAM only).
I found that removing the AGP based videoc
Parag Warudkar wrote:
Hi,
I have received no answer to my former question
(see http://marc.theaimsgroup.com/?l=linux-kernel&m=110827143716215&w=2).
I've spent some more time on that problem and have more or less confirmed
it's because of buggy bios. However, the linux kernel doesn't handle properl
* Andi Kleen <[EMAIL PROTECTED]> wrote:
> > Although I've not re-tested this today again, it used to help a bit to
> > specify
> > mem=3548M to decrease memory used by linux (tested with AGP card plugged
> > in, when
> > bios reported 3556MB RAM only).
> >
> > I found that removing the AGP
Martin MOKREJŠ <[EMAIL PROTECTED]> writes:
> Hi,
> I have received no answer to my former question
> (see http://marc.theaimsgroup.com/?l=linux-kernel&m=110827143716215&w=2).
That's because it's a BIOS problem.
There are limits on how much Linux can work around BIOS breakage.
> Although I'
> Hi,
> I have received no answer to my former question
> (see http://marc.theaimsgroup.com/?l=linux-kernel&m=110827143716215&w=2).
> I've spent some more time on that problem and have more or less confirmed
> it's because of buggy bios. However, the linux kernel doesn't handle properly
> such ca
Marcelo Tosatti wrote:
>
> On Wed, 2 May 2001, Jorge Nerin wrote:
>
>> Short version:
>> Under very heavy thrashing (about four hours) the system either lockups
>> or OOM handler kills a task even when there is swap space left.
>
>
> First of all, please try to reproduce the problem with
Marcelo Tosatti wrote:
>
> On Wed, 2 May 2001, Jorge Nerin wrote:
>
>> Short version:
>> Under very heavy thrashing (about four hours) the system either lockups
>> or OOM handler kills a task even when there is swap space left.
>
>
> First of all, please try to reproduce the problem with 2.4
On Wed, 2 May 2001, Jorge Nerin wrote:
> Short version:
> Under very heavy thrashing (about four hours) the system either lockups
> or OOM handler kills a task even when there is swap space left.
First of all, please try to reproduce the problem with 2.4.5-pre1.
If it still happens with pre
>Agreed, that's almost sure _not_ random memory corruption of the page
>structure. It looks like a VM bug (if you can reproduce trivially I'd give a
>try to test8 too since test8 is rock solid for me while test10 lockups in VM
>core at the second bonnie if using emulated highmem).
I was lucky. S
On Fri, 17 Nov 2000, Andrea Arcangeli wrote:
> Plus I add that the "if (!order) goto try_again" is an obvious
> deadlock prone bug introduced in test9 that should be removed.
1) how would this cause deadlocks?
2) how would this somehow be worse than the
unconditional 'goto try_again' we had be
On Fri, 17 Nov 2000, Andrea Arcangeli wrote:
> Actually memory balancing in 2.4.x doesn't get any information,
> not even the information about which _classzone_ where to free
> the memory (NOTE: both 2.2.x and 2.0.x _always_ got the
> classzone where to free memory at least). This classzone miss
On Fri, Nov 17, 2000 at 05:35:53PM +0100, [EMAIL PROTECTED] wrote:
> I did a little closer investigation. The BUG was triggered by a page with
> page->mapping pointing to an address space of a mapped ext2 file
> (page->mapping->a_ops == &ext2_aops). The page had PG_locked, PG_uptodate,
> PG_active
On Fri, 17 Nov 2000 [EMAIL PROTECTED] wrote:
>
> >> Whats the reasoning behind these ifs ?
> >
> >To catch memory corruption or things running out of control in the kernel.
> I was referring to the "if (!order) goto try_again" ifs in alloc_pages, not
> the "if (something) BUG()" ifs.
Basically,
>> before I hit yet another BUG in swap_state.c:60.
>
>The bug in swap_state:60 shows a kernel bug in the VM or random memory
>corruption. Make sure you can reproduce on x86 to be sure it's not a s390
>that is randomly corrupting memory. If you read the oops after the BUG message
>with asm at ha
On Fri, Nov 17, 2000 at 11:41:58AM +0100, [EMAIL PROTECTED] wrote:
> [..] But low on memory
> does mean low on real memory + swap space, doesn't it ? [..]
No. Low on memory here means that `grep MemFree [..] enough swap space but it isn't using any of it when the BUG hits. I think
This is normal.
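A small check along the lines Andrea describes, treating "low on memory" as a MemFree question rather than a swap question (standard /proc/meminfo field names; the threshold is purely illustrative):

#include <stdio.h>
#include <string.h>

/* Return a /proc/meminfo field in kB, or -1 if not found. */
static long meminfo_kb(const char *field)
{
        FILE *f = fopen("/proc/meminfo", "r");
        char line[256];
        size_t len = strlen(field);
        long kb = -1;

        if (!f)
                return -1;
        while (fgets(line, sizeof(line), f)) {
                if (strncmp(line, field, len) == 0 && line[len] == ':') {
                        sscanf(line + len + 1, "%ld", &kb);
                        break;
                }
        }
        fclose(f);
        return kb;
}

int main(void)
{
        long memfree = meminfo_kb("MemFree");
        long swapfree = meminfo_kb("SwapFree");

        printf("MemFree:  %ld kB\nSwapFree: %ld kB\n", memfree, swapfree);
        if (memfree >= 0 && memfree < 16384)    /* arbitrary 16 MiB threshold */
                printf("low on memory, regardless of free swap\n");
        return 0;
}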
>
>>
>> If they absolutely needs 4 pages for pmd pagetables due hardware constraints
>> I'd recommend to use _four_ hardware pages for each softpage, not two.
>
>Yes.
>
>However, it definitely is an issue of making trade-offs. Most 64-bit MMU
>models tend to have some flexibility in how you set up
On Thu, 16 Nov 2000, Andrea Arcangeli wrote:
>
> If they absolutely needs 4 pages for pmd pagetables due hardware constraints
> I'd recommend to use _four_ hardware pages for each softpage, not two.
Yes.
However, it definitely is an issue of making trade-offs. Most 64-bit MMU
models tend to h
On Thu, Nov 16, 2000 at 09:01:07AM -0800, Linus Torvalds wrote:
> "Linux pages" be _two_ hardware pages, and make a Linux pte contain two
If they absolutely needs 4 pages for pmd pagetables due hardware constraints
I'd recommend to use _four_ hardware pages for each softpage, not two.
The issue
On Thu, 16 Nov 2000 [EMAIL PROTECTED] wrote:
>
> Ok, the BUG() hit in get_pmd_slow:
>
> pmd_t *
> get_pmd_slow(pgd_t *pgd, unsigned long offset)
> {
>         pmd_t *pmd;
>         int i;
>
>         pmd = (pmd_t *) __get_free_pages(GFP_KERNEL,2);
You really need 4 pages?
There's no way to
>What happens if you just replace all places that would use a bad page
>table with a BUG()? (Ie do _not_ add the bug to the place where you
>added the test: by that time it's too late. I'm talking about the
>places where the bad page tables are used, like in the error cases of
>"get_pte_kernel_
In article <[EMAIL PROTECTED]>,
>After some trickery with some special hardware feature (storage
>keys) I found out that empty_bad_pmd_table and empty_bad_pte_table have
>been put to the page table quicklists multiple(!) times.
This is definitely bad, and means that something else really
>> +extern pte_t empty_bad_pte_table[];
>> extern __inline__ void free_pte_fast(pte_t *pte)
>> {
>> +        if (pte == empty_bad_pte_table)
>> +                return;
>
>I guess that should be BUG() instead of return, so that the callers can be
>fixed.
Not really. pte_free and pmd_free are cal
On Wed, Nov 15, 2000 at 01:39:13PM +0100, [EMAIL PROTECTED] wrote:
> +extern pte_t empty_bad_pte_table[];
> extern __inline__ void free_pte_fast(pte_t *pte)
> {
> +        if (pte == empty_bad_pte_table)
> +                return;
I guess that should be BUG() instead of return, so that the caller