Re: Fedora 34 Change: Scale ZRAM to Full Memory Size — arbitrary scaling

2021-01-29 Thread Zbigniew Jędrzejewski-Szmek
On Thu, Jan 28, 2021 at 02:20:09PM -0700, Chris Murphy wrote:
> On Thu, Jan 28, 2021 at 2:08 PM Adam Williamson
>  wrote:
> >
> > On Thu, 2021-01-28 at 13:46 -0700, Chris Murphy wrote:
> > >
> > > OK I'm seeing this problem in a VM with
> > > Fedora-Workstation-Live-x86_64-Rawhide-20210128.n.0.iso but I'm not
> > > sure how consistent it is yet. MemTotal is ~3G for a VM that has 4G
> > > allocated. Something's wrong...
> > >
> > > VM 5.11.0-0.rc5.20210127git2ab38c17aac1.136.fc34.x86_64
> > > [0.701792] Memory: 3016992K/4190656K available (43019K kernel
> > > code, 11036K rwdata, 27184K rodata, 5036K init, 31780K bss, 1173408K
> > > reserved, 0K cma-reserved)
> > >
> > > Baremetal 5.10.11-200.fc33.x86_64
> > > [0.125875] Memory: 12059084K/12493424K available (14345K kernel
> > > code, 3465K rwdata, 9704K rodata, 2536K init, 5504K bss, 434080K
> > > reserved, 0K cma-reserved)
> > >
> > > Why is reserved so much higher in the VM case? It clearly sees the 4G
> > > but is delimiting it to 3G for some reason I don't understand. This is
> > > well before the zram module is loaded, by the way.
> >
> > I've filed https://bugzilla.redhat.com/show_bug.cgi?id=1921923 for
> > this. zdzichu suggests https://lkml.org/lkml/2021/1/25/701 may be
> > related.
> 
> I'm not sure, because
> Revert "mm: fix initialization of struct page for holes in memory layout"
> landed in 5.10.11 and I wasn't having any of these memory-related issues
> with 5.10.10. I'm only seeing this so far with the debug kernels. Even
> rc5 nodebug doesn't exhibit the problem.

From https://lkml.org/lkml/2021/1/26/1215:
> I ran just the revert of bde9cfa3afe4 through CI twice, on both
> occasions all machines failed to boot. 

It seems that the revert is not enough, but at this point it looks like
some kernel regression. I'll keep the fraction bump in
zram-generator-defaults for now.

Zbyszek
___
devel mailing list -- devel@lists.fedoraproject.org
To unsubscribe send an email to devel-le...@lists.fedoraproject.org
Fedora Code of Conduct: 
https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: 
https://lists.fedoraproject.org/archives/list/devel@lists.fedoraproject.org


Re: Fedora 34 Change: Scale ZRAM to Full Memory Size — arbitrary scaling

2021-01-29 Thread Florian Weimer
* Alexander Bokovoy:

> This is a good note. If zram breaks the kernel's API promise to user space
> (/proc/meminfo is one such API), how can it be enabled by default? I
> would also question enabling zram by default if it does not play well
> with cgroups. We do depend on cgroups being properly managed by systemd,
> including resource allocation.

But that's impossible: The existing interfaces assume that there's no
RAM compression (or tiers of swap), so something has to give.  As these
reported numbers are used for auto-sizing heaps and caches, there have
to be heuristics that happen to work for the majority of cases.

(Similar to what file systems do if they allocate inodes dynamically,
but still have to synthesize a reasonable-looking maximum to satisfy the
POSIX statvfs interface constraints.)

The alternative would be to come up with entirely new interfaces.  The
container side of things did that, and from that perspective, anything
reading /proc/meminfo is already broken and needs to transition to the
new interfaces.  But that 

Thanks,
Florian
-- 
Red Hat GmbH, https://de.redhat.com/ , Registered seat: Grasbrunn,
Commercial register: Amtsgericht Muenchen, HRB 153243,
Managing Directors: Charles Cachera, Brian Klemm, Laurie Krebs, Michael O'Neill


Re: Fedora 34 Change: Scale ZRAM to Full Memory Size — arbitrary scaling

2021-01-28 Thread Chris Murphy
On Sat, Jan 23, 2021 at 4:29 AM Zbigniew Jędrzejewski-Szmek
 wrote:
>
> (One possible direction: one thing I want to explore next is using zram
> or zwap based on whether the machine has a physical swap device. Maybe
> such a language would be useful then — with additional variables
> specifying e.g. the physical swap size…)

What about setting vm.swappiness = 120?

When set to 100, the bias for reclaiming anonymous pages and file
pages is about equal. Setting it lower is predicated on (a) older
kernels and (b) spinning drives, where the cost of paging out and paging
back in is higher than dropping a file page and only reading it back in.
With zram-based swap, eviction and reclaim of anon pages is
unquestionably a lot cheaper now, and even cheaper than the cost of
reading in a file page. I'm even thinking it could be pushed higher
than 120. I don't think there's a way to make this smart enough to
scale with the performance of the swap backing storage, which is what we
really want. Hence 120 is a compromise in case there's also disk-based
swap.

A down-the-road enhancement might be to push this to 190 if no
disk-based swap is detected. This would also allow some time to get
some feedback with it set to 120 before pushing harder.
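
For reference, here is a rough sketch of how such a default could be
shipped; the drop-in path below is purely illustrative and not a decided
deliverable of the Change:

  # /etc/sysctl.d/99-swap-on-zram.conf  (hypothetical drop-in)
  # Bias reclaim toward anonymous pages, which are cheap to page out to zram.
  vm.swappiness = 120

The same value can be flipped at runtime with 'sudo sysctl -w
vm.swappiness=120' to compare behavior under a real workload before
making it a default.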


-- 
Chris Murphy


Re: Fedora 34 Change: Scale ZRAM to Full Memory Size — arbitrary scaling

2021-01-28 Thread Chris Murphy
On Thu, Jan 28, 2021 at 2:08 PM Adam Williamson
 wrote:
>
> On Thu, 2021-01-28 at 13:46 -0700, Chris Murphy wrote:
> >
> > OK I'm seeing this problem in a VM with
> > Fedora-Workstation-Live-x86_64-Rawhide-20210128.n.0.iso but I'm not
> > sure how consistent it is yet. MemTotal is ~3G for a VM that has 4G
> > allocated. Something's wrong...
> >
> > VM 5.11.0-0.rc5.20210127git2ab38c17aac1.136.fc34.x86_64
> > [0.701792] Memory: 3016992K/4190656K available (43019K kernel
> > code, 11036K rwdata, 27184K rodata, 5036K init, 31780K bss, 1173408K
> > reserved, 0K cma-reserved)
> >
> > Baremetal 5.10.11-200.fc33.x86_64
> > [0.125875] Memory: 12059084K/12493424K available (14345K kernel
> > code, 3465K rwdata, 9704K rodata, 2536K init, 5504K bss, 434080K
> > reserved, 0K cma-reserved)
> >
> > Why is reserved so much higher in the VM case? It clearly sees the 4G
> > but is delimiting it to 3G for some reason I don't understand. This is
> > well before the zram module is loaded, by the way.
>
> I've filed https://bugzilla.redhat.com/show_bug.cgi?id=1921923 for
> this. zdzichu suggests https://lkml.org/lkml/2021/1/25/701 may be
> related.

I'm not sure, because
Revert "mm: fix initialization of struct page for holes in memory layout"
landed in 5.10.11 and I wasn't having any of these memory-related issues
with 5.10.10. I'm only seeing this so far with the debug kernels. Even
rc5 nodebug doesn't exhibit the problem.

-- 
Chris Murphy


Re: Fedora 34 Change: Scale ZRAM to Full Memory Size — arbitrary scaling

2021-01-28 Thread Adam Williamson
On Thu, 2021-01-28 at 13:46 -0700, Chris Murphy wrote:
> 
> OK I'm seeing this problem in a VM with
> Fedora-Workstation-Live-x86_64-Rawhide-20210128.n.0.iso but I'm not
> sure how consistent it is yet. MemTotal is ~3G for a VM that has 4G
> allocated. Something's wrong...
> 
> VM 5.11.0-0.rc5.20210127git2ab38c17aac1.136.fc34.x86_64
> [0.701792] Memory: 3016992K/4190656K available (43019K kernel
> code, 11036K rwdata, 27184K rodata, 5036K init, 31780K bss, 1173408K
> reserved, 0K cma-reserved)
> 
> Baremetal 5.10.11-200.fc33.x86_64
> [0.125875] Memory: 12059084K/12493424K available (14345K kernel
> code, 3465K rwdata, 9704K rodata, 2536K init, 5504K bss, 434080K
> reserved, 0K cma-reserved)
> 
> Why is reserved so much higher in the VM case? It clearly sees the 4G
> but is delimiting it to 3G for some reason I don't understand. This is
> well before the zram module is loaded, by the way.

I've filed https://bugzilla.redhat.com/show_bug.cgi?id=1921923 for
this. zdzichu suggests https://lkml.org/lkml/2021/1/25/701 may be
related.
-- 
Adam Williamson
Fedora QA
IRC: adamw | Twitter: adamw_ha
https://www.happyassassin.net




Re: Fedora 34 Change: Scale ZRAM to Full Memory Size — arbitrary scaling

2021-01-28 Thread Chris Murphy
On Thu, Jan 28, 2021 at 1:46 PM Chris Murphy  wrote:
>
> VM 5.11.0-0.rc5.20210127git2ab38c17aac1.136.fc34.x86_64
> [0.701792] Memory: 3016992K/4190656K available (43019K kernel
> code, 11036K rwdata, 27184K rodata, 5036K init, 31780K bss, 1173408K
> reserved, 0K cma-reserved)
>
> Baremetal 5.10.11-200.fc33.x86_64
> [0.125875] Memory: 12059084K/12493424K available (14345K kernel
> code, 3465K rwdata, 9704K rodata, 2536K init, 5504K bss, 434080K
> reserved, 0K cma-reserved)


OK the problem is happening whenever I boot a Fedora 5.11 debug
kernel, going back to at least rc3. If it's not a debug kernel, the
problem doesn't happen.


--
Chris Murphy


Re: Fedora 34 Change: Scale ZRAM to Full Memory Size — arbitrary scaling

2021-01-28 Thread Chris Murphy
On Thu, Jan 28, 2021 at 12:34 PM Alexander Bokovoy  wrote:
>
> > On Thu, 28 Jan 2021, Chris Murphy wrote:
> >> >> > ram + zram + in-memory-zwap in the check.
> >> >>
> >> >> For bare metal IPA uses the python3-psutil call:
> >> >> psutil.virtual_memory.available()
> >> >>
> >> >> I don't know how/if psutil reports zram (or cgroup v1 and v2 for that
> >> >> matter).
> >> >
> >> > psutil (in general) reports data from /proc/meminfo; available comes
> >> > from MemAvailable: in that file. This is defined in the kernel as:
> >> >
> >> >MemAvailable: An estimate of how much memory is available for starting new
> >> >  applications, without swapping. Calculated from MemFree,
> >> >  SReclaimable, the size of the file LRU lists, and the low
> >> >  watermarks in each zone.
> >> >  The estimate takes into account that the system needs some
> >> >  page cache to function well, and that not all reclaimable
> >> >  slab will be reclaimable, due to items being in use. The
> >> >  impact of those factors will vary from system to system.
> >> >
> >> >Notice "without swapping" in second line.  Next question, how zram impacts
> >> >reporting of MemAvailable by kernel?
> >>
> >> This is a good note. If zram breaks the kernel's API promise to user space
> >> (/proc/meminfo is one such API), how can it be enabled by default? I
> >> would also question enabling zram by default if it does not play well
> >> with cgroups. We do depend on cgroups being properly managed by systemd,
> >> including resource allocation.
> >>
> >> In my opinion, zram enablement in Fedora is quite premature.
> >>
> >
> >
> >It's been the default Fedora-wide since Fedora 33. It's been used by default in
> >Fedora IoT since the beginning, and in openQA Anaconda tests for even
> >longer than that.
> >
> >What's premature about it?
>
> I tried to point to my line of thought in the sentences above the one you
> quoted. You might think that is irrelevant, which I'd accept as an
> argument, and we can agree to disagree.

Speculation is not an adequate explanation for calling the feature premature.

/proc/meminfo MemTotal:   12158520 kB

With no zram device versus a zram device sized 1:1 with MemTotal:

MemAvailable:   11367156 kB
MemAvailable:   11309564 kB

And

CommitLimit: 6079260 kB
CommitLimit:18237208 kB

You can test this as easily as I can via 'systemctl start/stop
swap-create@zram0' and see if it misaligns with expectations.
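
For anyone who wants to reproduce the comparison, something along these
lines should work (the unit name is the one mentioned above, as used by
zram-generator on Fedora; exact numbers will differ per machine):

  grep -E 'MemTotal|MemAvailable|CommitLimit' /proc/meminfo  # baseline
  sudo systemctl start swap-create@zram0                     # bring up swap on /dev/zram0
  grep -E 'MemTotal|MemAvailable|CommitLimit' /proc/meminfo  # with zram swap active
  sudo systemctl stop swap-create@zram0                      # and turn it back off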

I'm ignoring the cgroups complaint because there are other things we use in
Fedora by default that also don't work with cgroups resource control, and I
don't feel like beating up on those things; while suboptimal, it's also off
topic for this problem.


> Back to this subthread's topic. It looks like Adam found that something
> reduced the memory available to the system after a standard install process
> between Jan 24th and Jan 27th. Something allocated ~120MB more RAM
> than it did previously on Fedora Server, but the kernel also reports
> ~600MB less RAM available, even though in both cases QEMU was configured
> with 2048MB RAM.

OK I'm seeing this problem in a VM with
Fedora-Workstation-Live-x86_64-Rawhide-20210128.n.0.iso but I'm not
sure how consistent it is yet. MemTotal is ~3G for a VM that has 4G
allocated. Something's wrong...

VM 5.11.0-0.rc5.20210127git2ab38c17aac1.136.fc34.x86_64
[0.701792] Memory: 3016992K/4190656K available (43019K kernel
code, 11036K rwdata, 27184K rodata, 5036K init, 31780K bss, 1173408K
reserved, 0K cma-reserved)

Baremetal 5.10.11-200.fc33.x86_64
[0.125875] Memory: 12059084K/12493424K available (14345K kernel
code, 3465K rwdata, 9704K rodata, 2536K init, 5504K bss, 434080K
reserved, 0K cma-reserved)

Why is reserved so much higher in the VM case? It clearly sees the 4G
but is delimiting it to 3G for some reason I don't understand. This is
well before the zram module is loaded, by the way.


-- 
Chris Murphy


Re: Fedora 34 Change: Scale ZRAM to Full Memory Size — arbitrary scaling

2021-01-28 Thread Chris Murphy
On Thu, Jan 28, 2021 at 11:52 AM Adam Williamson
 wrote:
>
> Probably relevant - we log the output of `free` shortly after system
> install. Up to and including Fedora-Rawhide-20210124.n.0 , it looked
> approximately like this:
>
>               total        used        free      shared  buff/cache   available
> Mem:        2024132      189668     1609448        4104      225016     1684936
> Swap:       1011708           0     1011708
>
> In particular, "total" memory was reported as around 2,000,000 KiB (roughly
> 2 GB). In Fedora-Rawhide-20210127.n.1 and Fedora-Rawhide-20210128.n.0, though -
> the two most recent composes - it looks like this:
>
>               total        used        free      shared  buff/cache   available
> Mem:        1417856      311908      852832        4104      253116      944752
> Swap:       1417212           0     1417212
>
> "total" memory is now reported as around 1,400,000 KiB (roughly 1.4 GB). Not
> sure what caused this to change.

I've seen this in the last few weeks but I haven't figured out the
pattern. A test system with 12G RAM was transiently reporting 8.5G RAM
(using the free command). That is the same fractional difference as
you're reporting above - total memory showing as only about 70% of what
it should be. That could be a kernel bug.

The zram-generator sets the zram device size as a fraction of total
memory; it doesn't have a way to affect total memory.


-- 
Chris Murphy


Re: Fedora 34 Change: Scale ZRAM to Full Memory Size — arbitrary scaling

2021-01-28 Thread Alexander Bokovoy

On Thu, 28 Jan 2021, Chris Murphy wrote:

>> > ram + zram + in-memory-zwap in the check.
>>
>> For bare metal IPA uses the python3-psutil call:
>> psutil.virtual_memory.available()
>>
>> I don't know how/if psutil reports zram (or cgroup v1 and v2 for that
>> matter).
>
> psutil (in general) reports data from /proc/meminfo; available comes
> from MemAvailable: in that file. This is defined in the kernel as:
>
>MemAvailable: An estimate of how much memory is available for starting new
>  applications, without swapping. Calculated from MemFree,
>  SReclaimable, the size of the file LRU lists, and the low
>  watermarks in each zone.
>  The estimate takes into account that the system needs some
>  page cache to function well, and that not all reclaimable
>  slab will be reclaimable, due to items being in use. The
>  impact of those factors will vary from system to system.
>
>Notice "without swapping" in second line.  Next question, how zram impacts
>reporting of MemAvailable by kernel?

This is a good note. If zram breaks the kernel's API promise to user space
(/proc/meminfo is one such API), how can it be enabled by default? I
would also question enabling zram by default if it does not play well
with cgroups. We do depend on cgroups being properly managed by systemd,
including resource allocation.

In my opinion, zram enablement in Fedora is quite premature.




It's been the default Fedora-wide since Fedora 33. It's been used by default in
Fedora IoT since the beginning, and in openQA Anaconda tests for even
longer than that.

What's premature about it?


I tried to point to my line of thought in the sentences above the one you
quoted. You might think that is irrelevant, which I'd accept as an
argument, and we can agree to disagree.

Back to this subthread's topic. It looks like Adam found that something
reduced the memory available to the system after a standard install process
between Jan 24th and Jan 27th. Something allocated ~120MB more RAM
than it did previously on Fedora Server, but the kernel also reports
~600MB less RAM available, even though in both cases QEMU was configured
with 2048MB RAM.


--
/ Alexander Bokovoy
Sr. Principal Software Engineer
Security / Identity Management Engineering
Red Hat Limited, Finland


Re: Fedora 34 Change: Scale ZRAM to Full Memory Size — arbitrary scaling

2021-01-28 Thread Chris Murphy
On Thu, Jan 28, 2021, 11:11 AM Alexander Bokovoy 
wrote:

> On Thu, 28 Jan 2021, Tomasz Torcz wrote:
> >On Thu, Jan 28, 2021 at 11:04:34AM -0500, Rob Crittenden wrote:
> >> Zbigniew Jędrzejewski-Szmek wrote:
> >> > On Thu, Jan 28, 2021 at 03:20:38PM +0200, Alexander Bokovoy wrote:
> >> >> With today's OpenQA tests I can point out that using zram on 2048MB
> RAM
> >> >> VMs actually breaks FreeIPA deployment:
> >> >>
> https://openqa.fedoraproject.org/tests/763006#step/role_deploy_domain_controller/35
> >> >>
> >> >> OpenQA uses 2048MB RAM for QEMU VMs and this was typically OK for
> >> >> FreeIPA deployment with integrated CA and DNS server. Not anymore
> with
> >> >> zram activated:
> >> >>
> >> >> Jan 27 21:17:47 fedora zram_generator::generator[25243]: Creating
> unit dev-zram0.swap (/dev/zram0 with 1384MB)
> >> >>
> >> >> which ends up eating 2/3rds of the whole memory budget and FreeIPA
> >> >> installer fails:
> >> >>
> >> >> 2021-01-28T02:18:31Z DEBUG ipa-server-install was invoked with
> arguments [] and options: {'unattended': True, 'ip_addresses': None,
> 'domain_name': 'test.openqa.fedoraproject.org', 'realm_name': '
> TEST.OPENQA.FEDORAPROJECT.ORG', 'host_name': None, 'ca_cert
> >> >> 2021-01-28T02:18:31Z DEBUG IPA version 4.9.1-1.fc34
> >> >> 2021-01-28T02:18:31Z DEBUG IPA platform fedora
> >> >> 2021-01-28T02:18:31Z DEBUG IPA os-release Fedora 34 (Server Edition
> Prerelease)
> >> >> 2021-01-28T02:18:31Z DEBUG Available memory is 823529472B
> >> >> ...
> >> >> 2021-01-28T02:18:31Z DEBUG The ipa-server-install command failed,
> exception: ScriptError: Less than the minimum 1.2GB of RAM is available,
> 0.77GB available
> >> >> 2021-01-28T02:18:31Z ERROR Less than the minimum 1.2GB of RAM is
> available, 0.77GB available
> >> >> 2021-01-28T02:18:31Z ERROR The ipa-server-install command failed.
> See /var/log/ipaserver-install.log for more information
> >> >
> >> > Enabling zram doesn't really "take away memory", because no
> pre-allocation happens.
> >> > If there is no physical swap, then adding zram0 should just show
> additional
> >> > swap space, so I don't think it could cause the check to fail.
> >> > But if there is physical swap, the zram device is used with higher
> preference
> >> > than the physical swap. So I think the explanation could be that the
> VM has
> >> > a swap partition. Before, some pages would be swapped out to zram,
> and some would
> >> > be swapped out to the "real" swap. The fraction of RAM used for
> compressed zram
> >> > would be on the order of 25% (zram-fraction=0.5 multiplied by typical
> compression 2:1).
> >> >
> >> > But now the kernel sees more zram swap, so it inserts pages there,
> taking away
> >> > more of RAM, instead of saving pages to disk. So more memory (maybe
> 50% RAM) is
> >> > used for the unswappable compressed pages. But this shouldn't break
> things:
> >> > if there is enough pressure, pages would be swapped out to the
> physical swap device
> >> > too.
> >> >
> >> > Assuming that this guess is correct, the check that
> ipa-server-install is
> >> > doing should be adjusted. It should use the total available memory
> (ram + all kinds
> >> > of swap) in the check, and not just available uncompressed pages.
> >> > Or if it wants to ignore disk-based swap for some reason, it should
> use
> >> > ram + zram + in-memory-zwap in the check.
> >>
> >> For bare metal IPA uses the python3-psutil call:
> >> psutil.virtual_memory.available()
> >>
> >> I don't know how/if psutil reports zram (or cgroup v1 and v2 for that
> >> matter).
> >
> > psutil (in general) reports data from /proc/meminfo; available comes
> > from MemAvailable: in that file. This is defined in the kernel as:
> >
> >MemAvailable: An estimate of how much memory is available for starting new
> >  applications, without swapping. Calculated from MemFree,
> >  SReclaimable, the size of the file LRU lists, and the low
> >  watermarks in each zone.
> >  The estimate takes into account that the system needs some
> >  page cache to function well, and that not all reclaimable
> >  slab will be reclaimable, due to items being in use. The
> >  impact of those factors will vary from system to system.
> >
> >Notice "without swapping" in second line.  Next question, how zram impacts
> >reporting of MemAvailable by kernel?
>
> This is a good note. If zram breaks the kernel's API promise to user space
> (/proc/meminfo is one such API), how can it be enabled by default? I
> would also question enabling zram by default if it does not play well
> with cgroups. We do depend on cgroups being properly managed by systemd,
> including resource allocation.
>
> In my opinion, zram enablement in Fedora is quite premature.
>


It's been the default Fedora-wide since Fedora 33. It's been used by default in
Fedora IoT since the beginning, and in openQA Anaconda tests for even
longer than that.

What's premature about it?


Chris Murphy

Re: Fedora 34 Change: Scale ZRAM to Full Memory Size — arbitrary scaling

2021-01-28 Thread Adam Williamson
On Thu, 2021-01-28 at 10:18 -0800, Adam Williamson wrote:
> On Thu, 2021-01-28 at 14:16 +, Zbigniew Jędrzejewski-Szmek wrote:
> > On Thu, Jan 28, 2021 at 03:20:38PM +0200, Alexander Bokovoy wrote:
> > > With today's OpenQA tests I can point out that using zram on 2048MB RAM
> > > VMs actually breaks FreeIPA deployment:
> > > https://openqa.fedoraproject.org/tests/763006#step/role_deploy_domain_controller/35
> > > 
> > > OpenQA uses 2048MB RAM for QEMU VMs and this was typically OK for
> > > FreeIPA deployment with integrated CA and DNS server. Not anymore with
> > > zram activated:
> > > 
> > > Jan 27 21:17:47 fedora zram_generator::generator[25243]: Creating unit 
> > > dev-zram0.swap (/dev/zram0 with 1384MB)
> > > 
> > > which ends up eating 2/3rds of the whole memory budget and FreeIPA
> > > installer fails:
> > > 
> > > 2021-01-28T02:18:31Z DEBUG ipa-server-install was invoked with arguments 
> > > [] and options: {'unattended': True, 'ip_addresses': None, 'domain_name': 
> > > 'test.openqa.fedoraproject.org', 'realm_name': 
> > > 'TEST.OPENQA.FEDORAPROJECT.ORG', 'host_name': None, 'ca_cert
> > > 2021-01-28T02:18:31Z DEBUG IPA version 4.9.1-1.fc34
> > > 2021-01-28T02:18:31Z DEBUG IPA platform fedora
> > > 2021-01-28T02:18:31Z DEBUG IPA os-release Fedora 34 (Server Edition 
> > > Prerelease)
> > > 2021-01-28T02:18:31Z DEBUG Available memory is 823529472B
> > > ...
> > > 2021-01-28T02:18:31Z DEBUG The ipa-server-install command failed, 
> > > exception: ScriptError: Less than the minimum 1.2GB of RAM is available, 
> > > 0.77GB available
> > > 2021-01-28T02:18:31Z ERROR Less than the minimum 1.2GB of RAM is 
> > > available, 0.77GB available
> > > 2021-01-28T02:18:31Z ERROR The ipa-server-install command failed. See 
> > > /var/log/ipaserver-install.log for more information
> > 
> > Enabling zram doesn't really "take away memory", because no pre-allocation 
> > happens.
> > If there is no physical swap, then adding zram0 should just show additional
> > swap space, so I don't think it could cause the check to fail.
> > But if there is physical swap, the zram device is used with higher 
> > preference
> > than the physical swap. So I think the explanation could be that the VM has
> > a swap partition.
> 
> The openQA test in question runs after, and uses the hard disk from, a
> test that runs a default Fedora Server install:
> https://openqa.fedoraproject.org/tests/763650
> which, AIUI, should not be creating a swap partition. The logs from the test -
> https://openqa.fedoraproject.org/tests/763657/file/role_deploy_domain_controller-var_log.tar.gz
>  - 
> do not show any swaps being activated other than zram ones. So no, I
> don't think there is a swap partition.

Probably relevant - we log the output of `free` shortly after system
install. Up to and including Fedora-Rawhide-20210124.n.0 , it looked
approximately like this:

              total        used        free      shared  buff/cache   available
Mem:        2024132      189668     1609448        4104      225016     1684936
Swap:       1011708           0     1011708

In particular, "total" memory was reported as around 2,000,000 KiB (roughly
2 GB). In Fedora-Rawhide-20210127.n.1 and Fedora-Rawhide-20210128.n.0, though -
the two most recent composes - it looks like this:

              total        used        free      shared  buff/cache   available
Mem:        1417856      311908      852832        4104      253116      944752
Swap:       1417212           0     1417212

"total" memory is now reported as around 1,400,000 KiB (roughly 1.4 GB). Not
sure what caused this to change.
-- 
Adam Williamson
Fedora QA
IRC: adamw | Twitter: adamw_ha
https://www.happyassassin.net




Re: Fedora 34 Change: Scale ZRAM to Full Memory Size — arbitrary scaling

2021-01-28 Thread Adam Williamson
On Thu, 2021-01-28 at 14:16 +, Zbigniew Jędrzejewski-Szmek wrote:
> On Thu, Jan 28, 2021 at 03:20:38PM +0200, Alexander Bokovoy wrote:
> > With today's OpenQA tests I can point out that using zram on 2048MB RAM
> > VMs actually breaks FreeIPA deployment:
> > https://openqa.fedoraproject.org/tests/763006#step/role_deploy_domain_controller/35
> > 
> > OpenQA uses 2048MB RAM for QEMU VMs and this was typically OK for
> > FreeIPA deployment with integrated CA and DNS server. Not anymore with
> > zram activated:
> > 
> > Jan 27 21:17:47 fedora zram_generator::generator[25243]: Creating unit 
> > dev-zram0.swap (/dev/zram0 with 1384MB)
> > 
> > which ends up eating 2/3rds of the whole memory budget and FreeIPA
> > installer fails:
> > 
> > 2021-01-28T02:18:31Z DEBUG ipa-server-install was invoked with arguments [] 
> > and options: {'unattended': True, 'ip_addresses': None, 'domain_name': 
> > 'test.openqa.fedoraproject.org', 'realm_name': 
> > 'TEST.OPENQA.FEDORAPROJECT.ORG', 'host_name': None, 'ca_cert
> > 2021-01-28T02:18:31Z DEBUG IPA version 4.9.1-1.fc34
> > 2021-01-28T02:18:31Z DEBUG IPA platform fedora
> > 2021-01-28T02:18:31Z DEBUG IPA os-release Fedora 34 (Server Edition 
> > Prerelease)
> > 2021-01-28T02:18:31Z DEBUG Available memory is 823529472B
> > ...
> > 2021-01-28T02:18:31Z DEBUG The ipa-server-install command failed, 
> > exception: ScriptError: Less than the minimum 1.2GB of RAM is available, 
> > 0.77GB available
> > 2021-01-28T02:18:31Z ERROR Less than the minimum 1.2GB of RAM is available, 
> > 0.77GB available
> > 2021-01-28T02:18:31Z ERROR The ipa-server-install command failed. See 
> > /var/log/ipaserver-install.log for more information
> 
> Enabling zram doesn't really "take away memory", because no pre-allocation 
> happens.
> If there is no physical swap, then adding zram0 should just show additional
> swap space, so I don't think it could cause the check to fail.
> But if there is physical swap, the zram device is used with higher preference
> than the physical swap. So I think the explanation could be that the VM has
> a swap partition.

The openQA test in question runs after, and uses the hard disk from, a
test that runs a default Fedora Server install:
https://openqa.fedoraproject.org/tests/763650
which, AIUI, should not be creating a swap partition. The logs from the test -
https://openqa.fedoraproject.org/tests/763657/file/role_deploy_domain_controller-var_log.tar.gz
 - 
do not show any swaps being activated other than zram ones. So no, I
don't think there is a swap partition.
-- 
Adam Williamson
Fedora QA
IRC: adamw | Twitter: adamw_ha
https://www.happyassassin.net




Re: Fedora 34 Change: Scale ZRAM to Full Memory Size — arbitrary scaling

2021-01-28 Thread Alexander Bokovoy

On Thu, 28 Jan 2021, Chris Murphy wrote:
  On Thu, Jan 28, 2021, 6:21 AM Alexander Bokovoy 
  wrote:   
   
With today's OpenQA tests I can point out that using zram on 2048MB RAM
VMs actually breaks FreeIPA deployment:
https://openqa.fedoraproject.org/tests/763006#step/role_deploy_domain_controller/35
   
OpenQA uses 2048MB RAM for QEMU VMs and this was typically OK for  
FreeIPA deployment with integrated CA and DNS server. Not anymore with 
zram activated:
   
Jan 27 21:17:47 fedora zram_generator::generator[25243]: Creating unit 
dev-zram0.swap (/dev/zram0 with 1384MB)
   
  Swap on zram wasn't enabled in Fedora just recently, so why are the tests
  only now failing?
  Also, the default fraction is 0.5 so the zram device size should be  
  1024MB. Why is it 1384MB?


I have no idea why. This is Rawhide of today, automatically provisioned
in OpenQA. All logs are available in 'Logs and Artifacts' tab on the
OpenQA page referenced above.

Tests started to fail because we raised the low memory limit in FreeIPA
from 0.7GB to 1.2GB after seeing real world issues with lower memory
pressures.

which ends up eating 2/3rds of the whole memory budget and FreeIPA 
installer fails:   
   
  That's not possible with default settings. The device size is not the
  amount of memory used. The device size is virtual. The real amount used  
  depends on what's paged out to swap divided by the compression ratio.
  If swap is being used at all it means the workload already used ~95% of  
  memory.  


In the OpenQA test there is nothing running on the system yet. This
literally happens when a test runs 'ipa-server-install' and we haven't
yet gone to configure *anything*. This check is one of the earliest in
the installer.

   
2021-01-28T02:18:31Z DEBUG ipa-server-install was invoked with arguments   
[] and options: {'unattended': True, 'ip_addresses': None, 
'domain_name': 'test.openqa.fedoraproject.org', 'realm_name':
'TEST.OPENQA.FEDORAPROJECT.ORG', 'host_name': None, 'ca_cert
2021-01-28T02:18:31Z DEBUG IPA version 4.9.1-1.fc34
2021-01-28T02:18:31Z DEBUG IPA platform fedora 
2021-01-28T02:18:31Z DEBUG IPA os-release Fedora 34 (Server Edition
Prerelease)
2021-01-28T02:18:31Z DEBUG Available memory is 823529472B  
...
2021-01-28T02:18:31Z DEBUG The ipa-server-install command failed,  
exception: ScriptError: Less than the minimum 1.2GB of RAM is available,   
0.77GB available.  
   
  We need more info. Something is consuming more memory than the   
  provisioning expects. If there was no swap, the problem would be worse.


Please look into OpenQA logs. There is a tarball with /var/log/* content
there (and few more things), including a full systemd journal which
might have some additional information.



--
/ Alexander Bokovoy
Sr. Principal Software Engineer
Security / Identity Management Engineering
Red Hat Limited, Finland


Re: Fedora 34 Change: Scale ZRAM to Full Memory Size — arbitrary scaling

2021-01-28 Thread Alexander Bokovoy

On Thu, 28 Jan 2021, Tomasz Torcz wrote:

On Thu, Jan 28, 2021 at 11:04:34AM -0500, Rob Crittenden wrote:

Zbigniew Jędrzejewski-Szmek wrote:
> On Thu, Jan 28, 2021 at 03:20:38PM +0200, Alexander Bokovoy wrote:
>> With today's OpenQA tests I can point out that using zram on 2048MB RAM
>> VMs actually breaks FreeIPA deployment:
>> 
https://openqa.fedoraproject.org/tests/763006#step/role_deploy_domain_controller/35
>>
>> OpenQA uses 2048MB RAM for QEMU VMs and this was typically OK for
>> FreeIPA deployment with integrated CA and DNS server. Not anymore with
>> zram activated:
>>
>> Jan 27 21:17:47 fedora zram_generator::generator[25243]: Creating unit 
dev-zram0.swap (/dev/zram0 with 1384MB)
>>
>> which ends up eating 2/3rds of the whole memory budget and FreeIPA
>> installer fails:
>>
>> 2021-01-28T02:18:31Z DEBUG ipa-server-install was invoked with arguments [] 
and options: {'unattended': True, 'ip_addresses': None, 'domain_name': 
'test.openqa.fedoraproject.org', 'realm_name': 'TEST.OPENQA.FEDORAPROJECT.ORG', 
'host_name': None, 'ca_cert
>> 2021-01-28T02:18:31Z DEBUG IPA version 4.9.1-1.fc34
>> 2021-01-28T02:18:31Z DEBUG IPA platform fedora
>> 2021-01-28T02:18:31Z DEBUG IPA os-release Fedora 34 (Server Edition 
Prerelease)
>> 2021-01-28T02:18:31Z DEBUG Available memory is 823529472B
>> ...
>> 2021-01-28T02:18:31Z DEBUG The ipa-server-install command failed, exception: 
ScriptError: Less than the minimum 1.2GB of RAM is available, 0.77GB available
>> 2021-01-28T02:18:31Z ERROR Less than the minimum 1.2GB of RAM is available, 
0.77GB available
>> 2021-01-28T02:18:31Z ERROR The ipa-server-install command failed. See 
/var/log/ipaserver-install.log for more information
>
> Enabling zram doesn't really "take away memory", because no pre-allocation 
happens.
> If there is no physical swap, then adding zram0 should just show additional
> swap space, so I don't think it could cause the check to fail.
> But if there is physical swap, the zram device is used with higher preference
> than the physical swap. So I think the explanation could be that the VM has
> a swap partition. Before, some pages would be swapped out to zram, and some 
would
> be swapped out to the "real" swap. The fraction of RAM used for compressed 
zram
> would be on the order of 25% (zram-fraction=0.5 multiplied by typical 
compression 2:1).
>
> But now the kernel sees more zram swap, so it inserts pages there, taking away
> more of RAM, instead of saving pages to disk. So more memory (maybe 50% RAM) 
is
> used for the unswappable compressed pages. But this shouldn't break things:
> if there is enough pressure, pages would be swapped out to the physical swap 
device
> too.
>
> Assuming that this guess is correct, the check that ipa-server-install is
> doing should be adjusted. It should use the total available memory (ram + all 
kinds
> of swap) in the check, and not just available uncompressed pages.
> Or if it wants to ignore disk-based swap for some reason, it should use
> ram + zram + in-memory-zwap in the check.

For bare metal IPA uses the python3-psutil call:
psutil.virtual_memory.available()

I don't know how/if psutil reports zram (or cgroup v1 and v2 for that
matter).


psutil (in general) reports data from /proc/meminfo; available comes
from MemAvailable: in that file. This is defined in the kernel as:

MemAvailable: An estimate of how much memory is available for starting new
 applications, without swapping. Calculated from MemFree,
 SReclaimable, the size of the file LRU lists, and the low
 watermarks in each zone.
 The estimate takes into account that the system needs some
 page cache to function well, and that not all reclaimable
 slab will be reclaimable, due to items being in use. The
 impact of those factors will vary from system to system.

Notice "without swapping" in second line.  Next question, how zram impacts
reporting of MemAvailable by kernel?


This is a good note. If zram breaks the kernel's API promise to user space
(/proc/meminfo is one such API), how can it be enabled by default? I
would also question enabling zram by default if it does not play well
with cgroups. We do depend on cgroups being properly managed by systemd,
including resource allocation.

In my opinion, zram enablement in Fedora is quite premature.

--
/ Alexander Bokovoy
Sr. Principal Software Engineer
Security / Identity Management Engineering
Red Hat Limited, Finland


Re: Fedora 34 Change: Scale ZRAM to Full Memory Size — arbitrary scaling

2021-01-28 Thread Chris Murphy
On Thu, Jan 28, 2021, 6:21 AM Alexander Bokovoy  wrote:

>
> With today's OpenQA tests I can point out that using zram on 2048MB RAM
> VMs actually breaks FreeIPA deployment:
>
> https://openqa.fedoraproject.org/tests/763006#step/role_deploy_domain_controller/35
>
> OpenQA uses 2048MB RAM for QEMU VMs and this was typically OK for
> FreeIPA deployment with integrated CA and DNS server. Not anymore with
> zram activated:
>
> Jan 27 21:17:47 fedora zram_generator::generator[25243]: Creating unit
> dev-zram0.swap (/dev/zram0 with 1384MB)
>

Swap on zram wasn't enabled in Fedora just recently, so why are the tests
only now failing?

Also, the default fraction is 0.5 so the zram device size should be 1024MB.
Why is it 1384MB?
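
For context, the pre-Change zram-generator config corresponding to that
0.5 fraction looks roughly like this (path and values are from memory, so
treat them as an approximation rather than the exact shipped file):

  # /usr/lib/systemd/zram-generator.conf
  [zram0]
  zram-fraction = 0.5
  max-zram-size = 4096

With those values a 2048MB guest would get a device of about 1024MB.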


> which ends up eating 2/3rds of the whole memory budget and FreeIPA
> installer fails:
>

That's not possible with default settings. The device size is not the
amount of memory used. The device size is virtual. The real amount used
depends on what's paged out to swap divided by the compression ratio.

If swap is being used at all it means the workload already used ~95% of
memory.


> 2021-01-28T02:18:31Z DEBUG ipa-server-install was invoked with arguments
> [] and options: {'unattended': True, 'ip_addresses': None, 'domain_name': '
> test.openqa.fedoraproject.org', 'realm_name': '
> TEST.OPENQA.FEDORAPROJECT.ORG', 'host_name': None, 'ca_cert
> 2021-01-28T02:18:31Z DEBUG IPA version 4.9.1-1.fc34
> 2021-01-28T02:18:31Z DEBUG IPA platform fedora
> 2021-01-28T02:18:31Z DEBUG IPA os-release Fedora 34 (Server Edition
> Prerelease)
> 2021-01-28T02:18:31Z DEBUG Available memory is 823529472B
> ...
> 2021-01-28T02:18:31Z DEBUG The ipa-server-install command failed,
> exception: ScriptError: Less than the minimum 1.2GB of RAM is available,
> 0.77GB available.



We need more info. Something is consuming more memory than the provisioning
expects. If there was no swap, the problem would be worse.


Chris Murphy


Re: Fedora 34 Change: Scale ZRAM to Full Memory Size — arbitrary scaling

2021-01-28 Thread Tomasz Torcz
On Thu, Jan 28, 2021 at 11:04:34AM -0500, Rob Crittenden wrote:
> Zbigniew Jędrzejewski-Szmek wrote:
> > On Thu, Jan 28, 2021 at 03:20:38PM +0200, Alexander Bokovoy wrote:
> >> With today's OpenQA tests I can point out that using zram on 2048MB RAM
> >> VMs actually breaks FreeIPA deployment:
> >> https://openqa.fedoraproject.org/tests/763006#step/role_deploy_domain_controller/35
> >>
> >> OpenQA uses 2048MB RAM for QEMU VMs and this was typically OK for
> >> FreeIPA deployment with integrated CA and DNS server. Not anymore with
> >> zram activated:
> >>
> >> Jan 27 21:17:47 fedora zram_generator::generator[25243]: Creating unit 
> >> dev-zram0.swap (/dev/zram0 with 1384MB)
> >>
> >> which ends up eating 2/3rds of the whole memory budget and FreeIPA
> >> installer fails:
> >>
> >> 2021-01-28T02:18:31Z DEBUG ipa-server-install was invoked with arguments 
> >> [] and options: {'unattended': True, 'ip_addresses': None, 'domain_name': 
> >> 'test.openqa.fedoraproject.org', 'realm_name': 
> >> 'TEST.OPENQA.FEDORAPROJECT.ORG', 'host_name': None, 'ca_cert
> >> 2021-01-28T02:18:31Z DEBUG IPA version 4.9.1-1.fc34
> >> 2021-01-28T02:18:31Z DEBUG IPA platform fedora
> >> 2021-01-28T02:18:31Z DEBUG IPA os-release Fedora 34 (Server Edition 
> >> Prerelease)
> >> 2021-01-28T02:18:31Z DEBUG Available memory is 823529472B
> >> ...
> >> 2021-01-28T02:18:31Z DEBUG The ipa-server-install command failed, 
> >> exception: ScriptError: Less than the minimum 1.2GB of RAM is available, 
> >> 0.77GB available
> >> 2021-01-28T02:18:31Z ERROR Less than the minimum 1.2GB of RAM is 
> >> available, 0.77GB available
> >> 2021-01-28T02:18:31Z ERROR The ipa-server-install command failed. See 
> >> /var/log/ipaserver-install.log for more information
> > 
> > Enabling zram doesn't really "take away memory", because no pre-allocation 
> > happens.
> > If there is no physical swap, then adding zram0 should just show additional
> > swap space, so I don't think it could cause the check to fail.
> > But if there is physical swap, the zram device is used with higher 
> > preference
> > than the physical swap. So I think the explanation could be that the VM has
> > a swap partition. Before, some pages would be swapped out to zram, and some 
> > would
> > be swapped out to the "real" swap. The fraction of RAM used for compressed 
> > zram
> > would be on the order of 25% (zram-fraction=0.5 multiplied by typical 
> > compression 2:1).
> > 
> > But now the kernel sees more zram swap, so it inserts pages there, taking 
> > away
> > more of RAM, instead of saving pages to disk. So more memory (maybe 50% 
> > RAM) is
> > used for the unswappable compressed pages. But this shouldn't break things:
> > if there is enough pressure, pages would be swapped out to the physical 
> > swap device
> > too.
> > 
> > Assuming that this guess is correct, the check that ipa-server-install is
> > doing should be adjusted. It should use the total available memory (ram + 
> > all kinds
> > of swap) in the check, and not just available uncompressed pages.
> > Or if it wants to ignore disk-based swap for some reason, it should use
> > ram + zram + in-memory-zwap in the check.
> 
> For bare metal IPA uses the python3-psutil call:
> psutil.virtual_memory.available()
> 
> I don't know how/if psutil reports zram (or cgroup v1 and v2 for that
> matter).

 psutil (in general) reports data from /proc/meminfo; available comes
 from MemAvailable: in that file. This is defined in the kernel as:

MemAvailable: An estimate of how much memory is available for starting new
  applications, without swapping. Calculated from MemFree,
  SReclaimable, the size of the file LRU lists, and the low
  watermarks in each zone.
  The estimate takes into account that the system needs some
  page cache to function well, and that not all reclaimable
  slab will be reclaimable, due to items being in use. The
  impact of those factors will vary from system to system.

Notice "without swapping" in second line.  Next question, how zram impacts
reporting of MemAvailable by kernel?

-- 
Tomasz Torcz “God, root, what's the difference?”
to...@pipebreaker.pl   “God is more forgiving.”


Re: Fedora 34 Change: Scale ZRAM to Full Memory Size — arbitrary scaling

2021-01-28 Thread Rob Crittenden
Zbigniew Jędrzejewski-Szmek wrote:
> On Thu, Jan 28, 2021 at 03:20:38PM +0200, Alexander Bokovoy wrote:
>> With today's OpenQA tests I can point out that using zram on 2048MB RAM
>> VMs actually breaks FreeIPA deployment:
>> https://openqa.fedoraproject.org/tests/763006#step/role_deploy_domain_controller/35
>>
>> OpenQA uses 2048MB RAM for QEMU VMs and this was typically OK for
>> FreeIPA deployment with integrated CA and DNS server. Not anymore with
>> zram activated:
>>
>> Jan 27 21:17:47 fedora zram_generator::generator[25243]: Creating unit 
>> dev-zram0.swap (/dev/zram0 with 1384MB)
>>
>> which ends up eating 2/3rds of the whole memory budget and FreeIPA
>> installer fails:
>>
>> 2021-01-28T02:18:31Z DEBUG ipa-server-install was invoked with arguments [] 
>> and options: {'unattended': True, 'ip_addresses': None, 'domain_name': 
>> 'test.openqa.fedoraproject.org', 'realm_name': 
>> 'TEST.OPENQA.FEDORAPROJECT.ORG', 'host_name': None, 'ca_cert
>> 2021-01-28T02:18:31Z DEBUG IPA version 4.9.1-1.fc34
>> 2021-01-28T02:18:31Z DEBUG IPA platform fedora
>> 2021-01-28T02:18:31Z DEBUG IPA os-release Fedora 34 (Server Edition 
>> Prerelease)
>> 2021-01-28T02:18:31Z DEBUG Available memory is 823529472B
>> ...
>> 2021-01-28T02:18:31Z DEBUG The ipa-server-install command failed, exception: 
>> ScriptError: Less than the minimum 1.2GB of RAM is available, 0.77GB 
>> available
>> 2021-01-28T02:18:31Z ERROR Less than the minimum 1.2GB of RAM is available, 
>> 0.77GB available
>> 2021-01-28T02:18:31Z ERROR The ipa-server-install command failed. See 
>> /var/log/ipaserver-install.log for more information
> 
> Enabling zram doesn't really "take away memory", because no pre-allocation 
> happens.
> If there is no physical swap, then adding zram0 should just show additional
> swap space, so I don't think it could cause the check to fail.
> But if there is physical swap, the zram device is used with higher preference
> than the physical swap. So I think the explanation could be that the VM has
> a swap partition. Before, some pages would be swapped out to zram, and some 
> would
> be swapped out to the "real" swap. The fraction of RAM used for compressed 
> zram
> would be on the order of 25% (zram-fraction=0.5 multiplied by typical 
> compression 2:1).
> 
> But now the kernel sees more zram swap, so it inserts pages there, taking away
> more of RAM, instead of saving pages to disk. So more memory (maybe 50% RAM) 
> is
> used for the unswappable compressed pages. But this shouldn't break things:
> if there is enough pressure, pages would be swapped out to the physical swap 
> device
> too.
> 
> Assuming that this guess is correct, the check that ipa-server-install is
> doing should be adjusted. It should use the total available memory (ram + all 
> kinds
> of swap) in the check, and not just available uncompressed pages.
> Or if it wants to ignore disk-based swap for some reason, it should use
> ram + zram + in-memory-zwap in the check.

For bare metal IPA uses the python3-psutil call:
psutil.virtual_memory.available()

I don't know how/if psutil reports zram (or cgroup v1 and v2 for that
matter).

I considered including swap in the calculation, but if you need the
swap just to install the thing then your experience is by definition
going to be poor (taking swap here to mean disk-based swap). In fact,
if the system relies too much on disk-based swap, the installation
process can time out altogether.
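
As a quick way to see what that check is actually reading (note the
spelling: virtual_memory() returns a namedtuple, so the value is the
.available attribute rather than a method call):

  python3 -c 'import psutil; print(psutil.virtual_memory().available)'
  grep -E 'MemAvailable|SwapFree' /proc/meminfo  # psutil derives "available" from MemAvailable

Comparing the two with and without the zram device active would show
whether swap is being counted at all.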

> 
> It would be nice to see the output of 'swapon -s' and 'zramctl' and 'free'
> on that machine.
> 
>> While we can ask Adam to increase memory in those VMs, 2GB RAM was our
>> (FreeIPA) recommended lower level target for home deployments with
>> Celeron or RPI4 systems. Now zram use will force those systems to be
>> unusable out of the box.
> 
> That's certainly not the goal. The main goal of the Change is to support
> machines with less RAM, not require more RAM.
> 
> Zbyszek


Re: Fedora 34 Change: Scale ZRAM to Full Memory Size — arbitrary scaling

2021-01-28 Thread Zbigniew Jędrzejewski-Szmek
On Thu, Jan 28, 2021 at 03:20:38PM +0200, Alexander Bokovoy wrote:
> With today's OpenQA tests I can point out that using zram on 2048MB RAM
> VMs actually breaks FreeIPA deployment:
> https://openqa.fedoraproject.org/tests/763006#step/role_deploy_domain_controller/35
> 
> OpenQA uses 2048MB RAM for QEMU VMs and this was typically OK for
> FreeIPA deployment with integrated CA and DNS server. Not anymore with
> zram activated:
> 
> Jan 27 21:17:47 fedora zram_generator::generator[25243]: Creating unit 
> dev-zram0.swap (/dev/zram0 with 1384MB)
> 
> which ends up eating 2/3rds of the whole memory budget and FreeIPA
> installer fails:
> 
> 2021-01-28T02:18:31Z DEBUG ipa-server-install was invoked with arguments [] 
> and options: {'unattended': True, 'ip_addresses': None, 'domain_name': 
> 'test.openqa.fedoraproject.org', 'realm_name': 
> 'TEST.OPENQA.FEDORAPROJECT.ORG', 'host_name': None, 'ca_cert
> 2021-01-28T02:18:31Z DEBUG IPA version 4.9.1-1.fc34
> 2021-01-28T02:18:31Z DEBUG IPA platform fedora
> 2021-01-28T02:18:31Z DEBUG IPA os-release Fedora 34 (Server Edition 
> Prerelease)
> 2021-01-28T02:18:31Z DEBUG Available memory is 823529472B
> ...
> 2021-01-28T02:18:31Z DEBUG The ipa-server-install command failed, exception: 
> ScriptError: Less than the minimum 1.2GB of RAM is available, 0.77GB available
> 2021-01-28T02:18:31Z ERROR Less than the minimum 1.2GB of RAM is available, 
> 0.77GB available
> 2021-01-28T02:18:31Z ERROR The ipa-server-install command failed. See 
> /var/log/ipaserver-install.log for more information

Enabling zram doesn't really "take away memory", because no pre-allocation 
happens.
If there is no physical swap, then adding zram0 should just show additional
swap space, so I don't think it could cause the check to fail.
But if there is physical swap, the zram device is used with higher preference
than the physical swap. So I think the explanation could be that the VM has
a swap partition. Before, some pages would be swapped out to zram, and some 
would
be swapped out to the "real" swap. The fraction of RAM used for compressed zram
would be on the order of 25% (zram-fraction=0.5 multiplied by typical 
compression 2:1).

But now the kernel sees more zram swap, so it inserts pages there, taking away
more of RAM, instead of saving pages to disk. So more memory (maybe 50% RAM) is
used for the unswappable compressed pages. But this shouldn't break things:
if there is enough pressure, pages would be swapped out to the physical swap 
device
too.

Assuming that this guess is correct, the check that ipa-server-install is
doing should be adjusted. It should use the total available memory (ram + all 
kinds
of swap) in the check, and not just available uncompressed pages.
Or if it wants to ignore disk-based swap for some reason, it should use
ram + zram + in-memory-zwap in the check.
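
A rough sketch of what such an adjusted check could look like, counting
available RAM plus all free swap from /proc/meminfo (the 1.2GB threshold is
taken from the log above; this is only an illustration, not a patch against
ipa-server-install):

  awk '/^(MemAvailable|SwapFree):/ { sum += $2 }
       END { exit !(sum >= 1.2 * 1024 * 1024) }' /proc/meminfo \
    && echo "enough memory" \
    || echo "less than the 1.2GB minimum available"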

It would be nice to see the output of 'swapon -s' and 'zramctl' and 'free'
on that machine.

> While we can ask Adam to increase memory in those VMs, 2GB RAM was our
> (FreeIPA) recommended lower level target for home deployments with
> Celeron or RPI4 systems. Now zram use will force those systems to be
> unusable out of the box.

That's certainly not the goal. The main goal of the Change is to support
machines with less RAM, not require more RAM.

Zbyszek


Re: Fedora 34 Change: Scale ZRAM to Full Memory Size — arbitrary scaling

2021-01-28 Thread Alexander Bokovoy

On Sat, 23 Jan 2021, Chris Murphy wrote:

On Sat, Jan 23, 2021 at 4:29 AM Zbigniew Jędrzejewski-Szmek
 wrote:


Hi,

the proposal for Fedora 34 is to use zram-size == 1.0 * ram.
(Which I think is OK for the reasons listed in the Change page [0].)
But the original motivation for this change was boosting the size on
machines with little ram [1]. I wrote an exploratory patch [2] to specify
the size as a formula. From the docs:

> An alternative way to set the zram device size as a mathematical expression
> that can be used instead of 'zram-fraction' and 'max-zram-size'. Basic arithmetic
> operators like '*', '+', '-', '/' are supported, as well as 'min()' and 'max()'
> and the variable 'ram', which specifies the size of RAM in megabytes.
>
> Examples:
>
> # this is the same as the default config
> zram-size = min(0.5 * ram, 4096)
>
> # fraction 1.0 for first 4GB, and then fraction 0.5 above that
> zram-size = 1.0 * min(ram, 4096) + 0.5 * max(ram - 4096, 0)

Now I'm a bit torn: the code is nice enough, but it seems to be a solution
in search of a problem. So I thought I'd try a little crowd-sourcing:
Would we have a real use for something like this?

(One possible direction: one thing I want to explore next is using zram
or zwap based on whether the machine has a physical swap device. Maybe
such a language would be useful then — with additional variables
specifying e.g. the physical swap size…)


I think everything discussed so far is neutral to good for all use
cases (all editions and spins) being discussed.

But I also think it's a good idea for zram-generator to be a bit more
biased toward setups with one or more of:

- low memory (4G or less, somewhat subjective)
- limited life storage (eMMC, SD Card, USB stick)
- slow drive (primarily rotational, but also all of the above)
- no swap (zram-based swap is better than no swap)

The above categories pretty much mean that the improvements to disk-based
swap that are in progress upstream aren't likely to be used. That means
zram-based swap will continue to provide a significant benefit.


With today's OpenQA tests I can point out that using zram on 2048MB RAM
VMs actually breaks FreeIPA deployment:
https://openqa.fedoraproject.org/tests/763006#step/role_deploy_domain_controller/35

OpenQA uses 2048MB RAM for QEMU VMs and this was typically OK for
FreeIPA deployment with integrated CA and DNS server. Not anymore with
zram activated:

Jan 27 21:17:47 fedora zram_generator::generator[25243]: Creating unit 
dev-zram0.swap (/dev/zram0 with 1384MB)

which ends up eating 2/3rds of the whole memory budget and FreeIPA
installer fails:

2021-01-28T02:18:31Z DEBUG ipa-server-install was invoked with arguments [] and 
options: {'unattended': True, 'ip_addresses': None, 'domain_name': 
'test.openqa.fedoraproject.org', 'realm_name': 'TEST.OPENQA.FEDORAPROJECT.ORG', 
'host_name': None, 'ca_cert
2021-01-28T02:18:31Z DEBUG IPA version 4.9.1-1.fc34
2021-01-28T02:18:31Z DEBUG IPA platform fedora
2021-01-28T02:18:31Z DEBUG IPA os-release Fedora 34 (Server Edition Prerelease)
2021-01-28T02:18:31Z DEBUG Available memory is 823529472B
...
2021-01-28T02:18:31Z DEBUG The ipa-server-install command failed, exception: 
ScriptError: Less than the minimum 1.2GB of RAM is available, 0.77GB available
2021-01-28T02:18:31Z ERROR Less than the minimum 1.2GB of RAM is available, 
0.77GB available
2021-01-28T02:18:31Z ERROR The ipa-server-install command failed. See 
/var/log/ipaserver-install.log for more information

While we can ask Adam to increase memory in those VMs, 2GB RAM was our
(FreeIPA) recommended lower level target for home deployments with
Celeron or RPI4 systems. Now zram use will force those systems to be
unusable out of the box.





Re: Fedora 34 Change: Scale ZRAM to Full Memory Size — arbitrary scaling

2021-01-23 Thread Chris Murphy
On Sat, Jan 23, 2021 at 4:39 PM Matthew Miller  wrote:
>
> On Sat, Jan 23, 2021 at 03:11:32PM -0700, Chris Murphy wrote:
> > - low memory (4G or less, somewhat subjective)
>
> From what I can see in my informal survey, a lot of 8GB users are benefiting
> too, with 1-2GB in the zram swap being common, often with that compressing
> very well.

Oh they definitely do. That list is just to indicate the
configurations likely to want to avoid disk-based swap and IO, even as
disk-based swap is receiving improvements. Therefore, I think it's OK
for zram-generator enhancements to be slightly biased toward those
configurations.

-- 
Chris Murphy
___
devel mailing list -- devel@lists.fedoraproject.org
To unsubscribe send an email to devel-le...@lists.fedoraproject.org
Fedora Code of Conduct: 
https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: 
https://lists.fedoraproject.org/archives/list/devel@lists.fedoraproject.org


Re: Fedora 34 Change: Scale ZRAM to Full Memory Size — arbitrary scaling

2021-01-23 Thread Matthew Miller
On Sat, Jan 23, 2021 at 03:11:32PM -0700, Chris Murphy wrote:
> - low memory (4G or less, somewhat subjective)

From what I can see in my informal survey, a lot of 8GB users are benefiting
too, with 1-2GB in the zram swap being common, often with that compressing
very well.


-- 
Matthew Miller

Fedora Project Leader
___
devel mailing list -- devel@lists.fedoraproject.org
To unsubscribe send an email to devel-le...@lists.fedoraproject.org
Fedora Code of Conduct: 
https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: 
https://lists.fedoraproject.org/archives/list/devel@lists.fedoraproject.org


Re: Fedora 34 Change: Scale ZRAM to Full Memory Size — arbitrary scaling

2021-01-23 Thread Chris Murphy
On Sat, Jan 23, 2021 at 4:29 AM Zbigniew Jędrzejewski-Szmek
 wrote:
>
> Hi,
>
> the proposal for Fedora 34 is to use zram-size == 1.0 * ram.
> (Which I think is OK for the reasons listed in the Change page [0].)
> But the original motivation for this change was boosting the size on
> machines with little ram [1]. I wrote an exploratory patch [2] to specify
> the size as a formula. From the docs:
>
> > An alternative way to set the zram device size as a mathematical expression
> > that can be used instead of 'zram-fraction' and 'max-zram-size'. Basic
> > arithmetic operators like '*', '+', '-', '/' are supported, as well as
> > 'min()' and 'max()' and the variable 'ram', which specifies the size of RAM
> > in megabytes.
> >
> > Examples:
> >
> > # this is the same as the default config
> > zram-size = min(0.5 * ram, 4096)
> >
> > # fraction 1.0 for first 4GB, and then fraction 0.5 above that
> > zram-size = 1.0 * min(ram, 4096) + 0.5 * max(ram - 4096, 0)
>
> Now I'm a bit torn: the code is nice enough, but it seems to be a solution
> in search of a problem. So I thought I'd try a little crowd-sourcing:
> Would we have a real use for something like this?
>
> (One possible direction: one thing I want to explore next is using zram
> or zswap based on whether the machine has a physical swap device. Maybe
> such a language would be useful then — with additional variables
> specifying e.g. the physical swap size…)

I think everything discussed so far is neutral to good for all use
cases (all editions and spins) being discussed.

But I also think it's a good idea for zram-generator to be a bit more
biased toward setups with one or more of:

- low memory (4G or less, somewhat subjective)
- limited life storage (eMMC, SD Card, USB stick)
- slow drive (primarily rotational, but also all of the above)
- no swap (zram-based swap is better than no swap)

The above categories pretty much mean that the improvements to disk-based
swap that are in progress upstream aren't likely to be used. That means
zram-based swap will continue to provide a significant benefit.

That's the tl;dr and now it's giant text wall time...

The remaining category is "everyone else", i.e. >= 8G RAM and a reasonably
performant SATA SSD or NVMe. This category benefits overall from the swap on
zram approach, mainly because swap thrashing is just so terrible. However, I
expect the future is a return to disk-based swap, for two reasons: (1) given
highly variable workloads, having 100% eviction efficacy simplifies memory
management and resource control, and (2) there are incremental upstream
improvements to swap performance, e.g. the anonymous memory balancing logic
has been totally reworked.

Neither zram nor zswap supports cgroup v2. There's work happening on making
zswap cgroup-aware, as well as integrating it into memory management proper,
rather than having all these different buffet-style add-ons that distros and
users have to evaluate and integrate. The swap improvements started landing
in kernel 5.8, and I'd say they're opt-in testable [1] for folks on kernel
5.10+, who can switch back and forth between exclusively zram-based and
disk-based swap to evaluate what's working better and what isn't.
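
A rough sketch of what that back-and-forth switching can look like, assuming
the stock Fedora setup where a config file in /etc overrides zram-generator's
packaged default and an empty one disables the zram0 device:

# drop the zram device for this boot and use a disk-based swap instead
swapoff /dev/zram0
swapon /path/to/swapfile-or-partition    # hypothetical path

# make the zram-disable stick across reboots
touch /etc/systemd/zram-generator.conf

# go back to zram-only: remove the override, then reboot (or daemon-reload
# and start dev-zram0.swap), and swapoff the disk device
rm /etc/systemd/zram-generator.conf
swapoff /path/to/swapfile-or-partition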

This is not a case of us moving back to disk-based swap soon. There is still
no cgroups support in device-mapper, and right now the only way to secure
swap is to put it on dm-crypt. One of the benefits of zram-based swap is that
it's volatile, so any leaks of personal information can be ignored (at least
once the system is powered off). Another issue is dynamically
creating/removing swapfiles: we kinda want to avoid partitions, because that
preallocation siphons away a possibly limited resource that may never get
used. Also related is the still-pending work (not yet happening) on Secure
Boot and hibernation images, which would necessarily need disk-based swap.

Anyway there's quite a lot of work happening, and even though it isn't
ready to be used by default in Fedora, it is a good time for early
adopters to do performance testing as this work continues. I
anticipate the server and desktop will eventually move away from
zram-based swap, but I can't give a time frame for it.


[1]
For the early adopters who want to experiment with their swap-dependent
workloads and different configurations:
   https://github.com/facebookexperimental/resctl-demo

[2]
One thing not discussed much is where to put the swapfile on Btrfs.
This is my current suggestion:

# dedicated subvolume, so the swapfile stays out of snapshots
btrfs sub create /var/swap
# NOCOW on the directory, so the new file inherits it (required for swap on Btrfs)
chattr +C /var/swap
fallocate -l 4G /var/swap/swapfile1
chmod 600 /var/swap/swapfile1
mkswap /var/swap/swapfile1
swapon /var/swap/swapfile1

Be sure to read the limitations in 'man 5 btrfs' - the above takes
care of most concerns, the other one is it needs to be a single device
Btrfs. Other arrangements are possible. Ping me on irc (cmurf) if you
have questions about alternatives.
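
One thing the above doesn't cover is making the swapfile persistent across
reboots; a plain fstab entry of the usual form should do (a sketch, adjust
paths as needed):

/var/swap/swapfile1  none  swap  defaults  0 0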

--
Chris Murphy
___
devel mailing list -- devel@lists.fedoraproject.org
To unsubscribe send an email to devel-le...@lists.fedoraproject.org
Fedora Code of Conduct: 
https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: 
https://lists.fedoraproject.org/archives/list/devel@lists.fedoraproject.org

Re: Fedora 34 Change: Scale ZRAM to Full Memory Size — arbitrary scaling

2021-01-23 Thread Matthew Miller
On Sat, Jan 23, 2021 at 11:28:00AM +, Zbigniew Jędrzejewski-Szmek wrote:
> Now I'm a bit torn: the code is nice enough, but it seems to be a solution
> in search of a problem. So I thought I'd try a little crowd-sourcing:
> Would we have a real use for something like this?

I like the syntax. I'm curious about this too, particularly in the middle.
There are lots of new laptops out there with 8GB of RAM. 

* 
https://discussion.fedoraproject.org/t/request-for-fedora-users-with-8gb-of-ram-is-zram-helping-you/26226

* https://twitter.com/mattdm/status/1353005390129229834


-- 
Matthew Miller

Fedora Project Leader
___
devel mailing list -- devel@lists.fedoraproject.org
To unsubscribe send an email to devel-le...@lists.fedoraproject.org
Fedora Code of Conduct: 
https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: 
https://lists.fedoraproject.org/archives/list/devel@lists.fedoraproject.org


Re: Fedora 34 Change: Scale ZRAM to Full Memory Size — arbitrary scaling

2021-01-23 Thread Zbigniew Jędrzejewski-Szmek
Hi,

the proposal for Fedora 34 is to use zram-size == 1.0 * ram.
(Which I think is OK for the reasons listed in the Change page [0].)
But the original motivation for this change was boosting the size on
machines with little ram [1]. I wrote an exploratory patch [2] to specify
the size as a formula. From the docs:

> An alternative way to set the zram device size as a mathematical expression
> that can be used instead of 'zram-fraction' and 'max-zram-size'. Basic
> arithmetic operators like '*', '+', '-', '/' are supported, as well as
> 'min()' and 'max()' and the variable 'ram', which specifies the size of RAM
> in megabytes.
> 
> Examples:
> 
> # this is the same as the default config
> zram-size = min(0.5 * ram, 4096)
> 
> # fraction 1.0 for first 4GB, and then fraction 0.5 above that
> zram-size = 1.0 * min(ram, 4096) + 0.5 * max(ram - 4096, 0)

Now I'm a bit torn: the code is nice enough, but it seems to be a solution
in search of a problem. So I thought I'd try a little crowd-sourcing:
Would we have a real use for something like this?

(One possible direction: one thing I want to explore next is using zram
or zswap based on whether the machine has a physical swap device. Maybe
such a language would be useful then — with additional variables
specifying e.g. the physical swap size…)
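
(For concreteness, a quick throwaway evaluation of the second example above,
assuming the usual min()/max() semantics rather than the actual parser:

for ram in 2048 4096 8192 16384; do
  awk -v ram="$ram" 'BEGIN {
    a = (ram < 4096) ? ram : 4096                # 1.0 * min(ram, 4096)
    b = (ram > 4096) ? 0.5 * (ram - 4096) : 0    # 0.5 * max(ram - 4096, 0)
    printf "ram=%5d MB -> zram-size=%5d MB\n", ram, a + b
  }'
done

which gives 2048, 4096, 6144 and 10240 MB respectively.)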

[0] https://fedoraproject.org/wiki/Changes/Scale_ZRAM_to_full_memory_size
[1] https://github.com/systemd/zram-generator/issues/51
[2] https://github.com/systemd/zram-generator/pull/64

Zbyszek
___
devel mailing list -- devel@lists.fedoraproject.org
To unsubscribe send an email to devel-le...@lists.fedoraproject.org
Fedora Code of Conduct: 
https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: 
https://lists.fedoraproject.org/archives/list/devel@lists.fedoraproject.org