On Fri, Jun 5, 2020 at 8:12 PM Samuel Sieb <sam...@sieb.net> wrote:
>
> On 6/5/20 6:59 PM, Chris Murphy wrote:
> > On Fri, Jun 5, 2020 at 6:47 PM Samuel Sieb <sam...@sieb.net> wrote:
> >>
> >> I installed the zram package and noticed the systemd-swap package, so
> >> installed that also.
> >
> > There are conflicting implementations:
> >
> > anaconda package provides zram.service
> > zram package provides zram-swap.service
> > systemd-swap package provides
>
> Did you leave something out?

systemd-swap package provides systemd-swap.service

> Are you saying that zram and systemd-swap both provide configuration for
> zram?

All three of those listed provide competing configurations for swap on
zram. Just to make a fine point, zram is generic; it is not swap
specific. It's just a compressed RAM disk. zswap is a different thing:
it is swap specific, providing a memory-based writeback cache in front
of disk-based swap.
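
If it helps to tell the two apart on a running system, both expose
state in sysfs. A quick check, assuming the paths documented in the
upstream kernel docs:

cat /sys/module/zswap/parameters/enabled   # Y means zswap is active
ls /sys/block/ | grep zram                 # lists zram devices, e.g. zram0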




>
> > I've only casually tested systemd-swap package. Note this isn't an
> > upstream systemd project. Whereas the proposed Rust zram-generator is
> > "upstream" in that it's maintained by the same folks, but it is not
> > provided by the systemd package, I think because it's in Rust.
>
> Ok, I was thinking the generator might require rebooting to get it to
> work.  And I saw the systemd-swap package and thought that sounded
> useful to try.

The generator does require a reboot to change configurations. You
could absolutely say: but Chris, with the other ones you can just
systemctl restart! That is true, but except for testing, I don't see
that as an advantage compared to the overall simplicity of
zram-generator and its reuse of existing systemd infrastructure that's
already well tested and maintained.


>
> > There shouldn't be any weird/bad interactions between them, but it is
> > possible for the user to become very confused which one is working. It
> > *should* be zram-generator because it runs much earlier during boot
> > than the others. But I have not thoroughly tested for conflicting
> > interactions, mainly just sanity testing to make sure things don't
> > blow up.
>
> I only started the one service, so I don't think there are any conflicts.

So what you can do is disable all of the above-listed service units;
then you'll be testing zram-generator alone. The config file for that
is /etc/systemd/zram-generator.conf
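
For example, to disable the competing units listed earlier (skip any
that aren't installed):

systemctl disable --now zram.service zram-swap.service systemd-swap.service

And a minimal sketch of the config, assuming the zram-fraction and
max-zram-size keys; the exact key names vary between zram-generator
versions, so check zram-generator.conf(5) on your system:

[zram0]
# fraction of RAM to use for the zram device
zram-fraction = 0.5
# cap the device size, in MB
max-zram-size = 4096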

Since there isn't yet an option to set swap priority, chances are it
gets auto-assigned by the kernel during boot. Usually /dev/zram0 comes
first and should get priority -2, and the disk-based swap device will
get -3. In that case, zram is already the higher priority. But if
they're flipped, you can just:

swapoff /dev/zram0
swapon -p 3000 /dev/zram0

Although really, any value higher than the priority of the disk-based
swap is sufficient.
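
Either way, you can confirm which device ended up with the higher
priority:

swapon --show=NAME,TYPE,SIZE,USED,PRIO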



>
> >> I adjusted the zram setting to 4G and reduced
> >> zswap a bit.  I have no idea what that is doing, it doesn't seem to
> >> affect anything I can measure.  The overall improvement in
> >> responsiveness is very nice.
> >
> > It might be you're modifying the configuration of a different
> > implementation from the one that's actually setting up swaponzram.
>
> No, it was quite clear that I was modifying the right config.  It's the
> /etc/systemd/swap.conf as described in the man page and it was affecting
> the result.

OK, that is not for zram-generator; that's for one of the others.
Offhand I don't know which one it's for. This is way too confusing
because of all the competing implementations, which is part of the
motivation for the feature -> buh bye, thank you for your service!
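
If you want to pin down which implementation that file belongs to,
querying the owning package should settle it:

rpm -qf /etc/systemd/swap.conf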


>
> >> I don't understand the numbers I'm getting for these.  I disabled my
> >> swap partition to force as much to go to zram as possible and then
> >> turned it back on.
> >>
> >> # swapon
> >> NAME      TYPE      SIZE USED  PRIO
> >> /dev/sda3 partition  16G 1.9G    -2
> >> /zram0    partition   4G   4G 32767
> >>
> >> This looks like I'm using all 4G of allocated space in the zram swap, but:
> >>
> >> # zramctl
> >> NAME       ALGORITHM DISKSIZE  DATA  COMPR  TOTAL STREAMS MOUNTPOINT
> >> /dev/zram0 lz4             4G  1.8G 658.5M 679.6M       4
> >>
> >> This suggests that it's only using 1.8G.  Can you explain what this means?
> >
> > Yeah that's confusing. zramctl just gets info from sysfs, but you
> > could double check it by
> >
> > cat /sys/block/zram0/mm_stat
> >
> > The first value should match "DATA" column in zramctl (which reports in 
> > MiB).
> >
> > While the kernel has long had support for using up to 32 swap devices
> > at the same time, this is seldom used in practice so it could be an
> > artifact of this? Indicating that all of this swap is "in use" from
> > the swap perspective, where zramctl is telling you the truth about
> > what the zram kernel module is actually using. Is it a cosmetic
> > reporting bug or intentional? Good question. I'll try to reproduce and
> > report it upstream and see what they say. But if you beat me to it
> > that would be great, and then I can just write the email for linux-mm
> > and cite your bug report. :D
>
> Part of my concern is that if it's not actually full, then why is it
> using so much of the disk swap?

Not sure. What should be true is that if you swapoff /dev/sda3, it'll
move any referenced anon pages to /dev/zram0. And then if you swapon
/dev/sda3 again, it will use 0 bytes until /dev/zram0 is full. What
kernel version are you using?
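
On the mm_stat check quoted above, the fields are positional; per the
kernel's zram documentation they line up roughly like this (older
kernels print fewer columns):

cat /sys/block/zram0/mm_stat
# orig_data_size compr_data_size mem_used_total mem_limit mem_used_max
# same_pages pages_compacted huge_pages
# sizes are in bytes; the last three are page counts

orig_data_size should match zramctl's DATA column and mem_used_total
its TOTAL column, which would help show whether swapon or zramctl is
the one reporting nonsense here.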

> For upstream, do you mean the kernel?

Yes, bugzilla.kernel.org - it goes to the linux-mm folks (memory
management). You can search for an existing zram bug to see what
component they use, then file yours, post the link here, and I'll pick
it up.



-- 
Chris Murphy