Re: Request for testing: Fedora 37 pre-Beta validation tests
Hi Adam,

Always on a long weekend! I have an issue with the installation of the Workstation 37 Beta. BTW, Rawhide is OK for installation (clean images), ditto for reinstallation of the F36 live image.

Am I the only one getting screen corruption with Anaconda and with some parts of Workstation 37? I am getting active snowflakes in the frames of pages, and within the contents, both for Anaconda and the live Workstation installation. I am still able to click on the options, but feel that someone should verify that I am not alone with this issue. I used the version on the web page dated 0831. I am retesting with the 0901 version and will get back to you.

Regards,
Leslie

Leslie Satenstein
Montréal Québec, Canada

On Monday, August 29, 2022 at 08:22:54 p.m. EDT, Adam Williamson wrote:

Hey folks! So we're in freeze for Fedora 37 Beta now, and the first go/no-go meeting should be on September 8. It would be really great if we can get the validation tests run now so we can find any remaining blocker bugs in good time to get them fixed. Right now the blocker list looks short, but there are definitely some tests that have not been run.

You can use the testcase_stats view to find tests that need running:
https://openqa.fedoraproject.org/testcase_stats/37/

For each validation test set (Base, Desktop etc.) it shows when each test was last performed, so you can easily look for Basic and Beta tests that have not yet been run. We need to run all of these. You can enter results using `relval report-results`, or edit the summary results page at https://fedoraproject.org/wiki/Test_Results:Current_Summary . That's a redirect link which will always point to the validation results page for the currently-nominated compose, which right now is 20220826.n.0.

Sumantro will be running a validation 'test week' starting on Wednesday, so you can drop by the Fedora Test Day room on chat.fedoraproject.org to hang out with other testers and get any help you need in testing.
See https://lists.fedoraproject.org/archives/list/test-annou...@lists.fedoraproject.org/message/KVVU6JVOKF4WI4ZS6AFLB7IVBCCNKFCX/ for that announcement.

Thanks folks!
--
Adam Williamson
Fedora QA
IRC: adamw | Twitter: adamw_ha
https://www.happyassassin.net

___
kde mailing list -- k...@lists.fedoraproject.org
To unsubscribe send an email to kde-le...@lists.fedoraproject.org
Fedora Code of Conduct: https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: https://lists.fedoraproject.org/archives/list/k...@lists.fedoraproject.org
Do not reply to spam, report it: https://pagure.io/fedora-infrastructure/new_issue
Re: Heads-up / for discussion: dnf not working with 1G of RAM or less
What are the contents of /etc/dnf/dnf.conf? It might help to see if there is a setting therein that is causing the mentioned issue.

Regards,
Leslie

Leslie Satenstein
Montréal Québec, Canada

On Sunday, August 28, 2022 at 11:24:30 p.m. EDT, Adam Williamson wrote:

Hey folks! I apologize for the wide distribution, but this seemed like a bug it'd be appropriate to get a wide range of input on. There's a bug that was proposed as an F37 Beta blocker:
https://bugzilla.redhat.com/show_bug.cgi?id=1907030

It's quite an old bug, but up until recently, the summary was apparently accurate - dnf would run out of memory with 512M of RAM, but was OK with 1G. However, as of quite recently, on F36 at least (not sure if anyone's explicitly tested F37), dnf operations are commonly failing on VMs/containers with 1G of RAM due to running out of RAM and getting OOM-killed.

There's some discussion in the bug about what might be causing this and potential ways to resolve it, and please do dig into/contribute to that if you can, but the other question here I guess is: how much do we care about this? How bad is it that you can't reliably run dnf operations on top of a minimal Fedora environment with 1G of RAM?

This obviously has some overlap with our stated hardware requirements, so here they are for the record:
https://docs.fedoraproject.org/en-US/fedora/latest/release-notes/welcome/Hardware_Overview/

That specifies 2GB as the minimum memory for "the default installation", by which I think it's referring to a default Workstation install, though this should be clarified. But then there's a "Low memory installations" boxout, which suggests that "users with less than 768MB of system memory may have better results performing a minimal install and adding to it afterward", which kinda is recommending that people do exactly the thing that doesn't work (do a minimal install then use dnf on it), and implying it'll work.
After some consideration I don't think it makes sense to take this bug as an F37 blocker, since it already affects F36, and that's what I'll be suggesting at the next blocker review meeting. However, it does seem a perfect candidate for prioritized bug status, and I've nominated it for that.

I guess if folks can chime in with thoughts here and/or in the bug report, maybe a consensus will emerge on just how big of an issue this is (and how likely it is to get fixed). There will presumably be a FESCo ticket related to prioritized bug status too.

Thanks folks!
--
Adam Williamson
Fedora QA
IRC: adamw | Twitter: adamw_ha
https://www.happyassassin.net
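Leslie's suggestion above to inspect /etc/dnf/dnf.conf can be sketched as below. The sample values are purely illustrative (gpgcheck, installonly_limit and keepcache are real dnf.conf settings, but these values are not taken from any system in this thread); on a real machine you would simply `cat /etc/dnf/dnf.conf`.

```shell
# Build a sample dnf.conf so the sketch runs anywhere without a real
# Fedora system; the option names are genuine dnf.conf [main] settings,
# but the values are only examples.
cat > /tmp/dnf.conf.sample <<'EOF'
[main]
gpgcheck=1
installonly_limit=3
clean_requirements_on_remove=True
keepcache=0
EOF

# Pull out a couple of cache/retention settings worth checking:
grep -E '^(keepcache|installonly_limit)=' /tmp/dnf.conf.sample
```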
Re: Btrfs question for Fedora 33 beta. How can I add nocow to /var
Fedora Everything presentation from YouTube:
https://www.youtube.com/watch?v=qOv-EFdVoss

Regards,
Leslie

Leslie Satenstein
Montréal Québec, Canada

On Sunday, October 25, 2020, 10:38:33 p.m. EDT, Chris Murphy wrote:

On Sun, Oct 25, 2020 at 6:56 PM Leslie Satenstein via devel wrote:
>
> Hi Chris,
>
> This weekend past, I did create /opt and /var as subvolumes. For the empty
> /opt, it was easy. For /var, it took the live ISO to help with moving
> directory /var to subvolume /var.

Recommended reading:
https://btrfs.wiki.kernel.org/index.php/SysadminGuide#Layout

The "flat" method for /var means it actually gets swapped from the 'old dir var' to the 'new subvolume var' at reboot time via the fstab entry, resulting in it being mounted on /var. You can mount the Btrfs file system again at /mnt and clean out the root/var directory.

There is a sort of "through the looking glass" experience with the "flat" layout. I often regret not giving subvolumes names different from their mount point if I use this layout style. So instead of a "var" subvolume mounted at /var, I'll name it var33 (i.e. var for Fedora 33). Maybe we'll end up using native systemd mount units for these kinds of things so that fstab isn't overly complicated.

The "nested" layout just substitutes in-place, no need for an fstab entry. Way simpler. Except if you ever have to do a rollback, you then have to move it into the new location before the rollback. That isn't always possible, or maybe you wouldn't have to roll back in the first place.

> I also intend to do the same with /sys on this, my beta system.

I'm not sure about this. /sys is a pseudo-filesystem; the contents aren't really on the root file system.

> The rationale for my doing the subvolume exercise is the following:
> 1) Under default installation, each snapshot of root has a copy of all
> subdirectories including active /var and /sys. I noted that /var/log and
> /var/cache are rather volatile. My SSD size is 120 GB.
> 2) By isolating /var, /opt and /sys, the root snapshot becomes less bloated.

/var contains the rpm database, which means at least /var/lib/rpm is tied to /usr and to some degree /boot. That means any need to roll back one requires rolling back all in conjunction. And yet /var/log is one thing we probably do not want to roll back, and possibly the same for /var/cache in its entirety. But... work in progress.

> 4) But /var/log is quite active, as is /var/cache. I will be using btrfs
> defragment on /var.

If it's a (spinning) HDD it might be suited for the autodefrag mount option, so long as the workload isn't heavy database use. Autodefrag is intended for light database use like web browsers, and spinning drives. There's a proposal upstream to make it a settable property so it can be enabled selectively, rather than as a mount option.

I don't ever defragment SSD/NVMe. I think you're better off using bcc-tools' fileslower to evaluate whether there are unexpected latencies happening, and quantifying how much defragmenting solves the problem. You can further narrow down the source of latency with btrfsdist, btrfsslower and biolatency. (There are ext4 and xfs equivalents.)

> 5) Having /var as nodatacow puts /var at the same risk level as when /var was
> on the ext4 system.

If you're wanting to save space, you may want to experiment with compression. And nodatacow means no compression. I think you want to be selective with nodatacow, for very specific reasons; the top candidate for it already gets nodatacow set automatically by libvirt when a storage pool is created.

> I will be using fstrim -a (to do SSD trims) and I want to use snapper to
> manage the subvolume snapshots. The generations of snapshots for /var and /
> will be my objective.
> I will be using crontab and a script to take snapshots just before the
> script launches "sudo dnf update -y".

OK, you might want to look at the dnf snapper plugin for this.

> This switch to btrfs is a learning experience for me. Fedora is my passion.
> From my studies, I may discover that you are absolutely right to state that I
> do not need to make the extra subvolumes.
> The advantage I have over you is my career. It is called "retirement".
> Retirement comes with spare time to study, to learn, to write code, to
> explore, experiment and to share experiences.

Cool! Have fun!

--
Chris Murphy
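Leslie's plan above (a crontab script that takes snapshots just before "sudo dnf update -y") might be sketched like this. The /.snapshots destination, the subvolume names and the snapshot naming scheme are all assumptions, not from the thread; the btrfs command is only echoed so the sketch runs without root or a btrfs filesystem.

```shell
#!/bin/sh
# Sketch of a pre-update snapshot script (hypothetical paths and names).
snap_dir=/.snapshots                 # assumed snapshot destination
stamp=$(date +%Y%m%d-%H%M%S)         # one timestamp for the whole run

take_snapshot() {
    # $1 = subvolume mount point, $2 = snapshot base name.
    # On a real system this would execute the command instead of echoing:
    echo btrfs subvolume snapshot -r "$1" "$snap_dir/$2-$stamp"
}

take_snapshot /    root
take_snapshot /var var
# then: sudo dnf update -y
```

The dnf snapper plugin Chris mentions would do this kind of thing automatically around each transaction.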
Re: Btrfs question for Fedora 33 beta. How can I add nocow to /var
Thank you all. Is it worthwhile putting compress=lzo on root? I have done it for /home without concern.

Regards,
Leslie

Leslie Satenstein
Montréal Québec, Canada

On Monday, October 26, 2020, 2:17:40 a.m. EDT, Ian Kent wrote:

On Sun, 2020-10-25 at 20:38 -0600, Chris Murphy wrote:
> On Sun, Oct 25, 2020 at 6:56 PM Leslie Satenstein via devel
> > I also intend to do the same with /sys on this, my beta system.
>
> I'm not sure about this. /sys is a pseudo-filesystem, the contents
> aren't really on the root file system.

That's right. Like /proc is a proc file system, /sys is a sysfs file system (similar to other pseudo file systems). They are memory based and files are generated on access: proc files mostly directly from kernel data structures, and sysfs mostly from an rb-tree data structure within the sysfs file system populated at boot.

You must mount /sys as a sysfs file system in order for the rb-tree to be populated; you can't make it a btrfs file system (or any other file system, for that matter).

Ian
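Ian's point can be checked directly in the mount table on any running Linux system: /proc and /sys appear with their own kernel-backed filesystem types, never btrfs or ext4 (exact options in the output will vary by system).

```shell
# /proc/mounts lines are: device mountpoint fstype options dump pass.
# proc and sysfs show up with their own fstypes, confirming they are
# pseudo-filesystems rather than directories on the root file system.
grep -E '^[^ ]+ /(proc|sys) ' /proc/mounts
```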
Re: Btrfs question for Fedora 33 beta. How can I add nocow to /var
Hi Chris,

I am again attaching a program for you to try. I AM THE AUTHOR. It has been extended to manage btrfs entries.

fstabxref -o /tmp/fstab

It reads the /dev/disk/by-xxx contents, /etc/mtab and /etc/fstab. It validates each fstab entry as it builds the output. Errors are pointed out.

In the process of coding it, I found that if you have "btrfs sub create var" at the "/" level, mtab will show it as /var, ditto for the other subvols. And changing /etc/fstab to include the / before the var appears to make no difference: subvol=var and subvol=/var behave the same.

Things I discovered: if you look at other distros based on btrfs, root00 is replaced by @, so @home, @var etc. are used. For a while that @ thing stumped me. But then, I persisted.

I am going to study your link's contents. My code is open source and actually I find it very useful. Use labels, UUIDs, PARTUUIDs, PARTLABELs, /dev/ names; my code handles it. If you find it useful and want the source git repo, just ask.

Regards,
Leslie

Leslie Satenstein
Montréal Québec, Canada
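As a much smaller illustration of the kind of data a tool like fstabxref works with (this is not Leslie's program, just a sketch over a made-up fstab; the UUID and subvolume names are invented), an awk snippet can pull the subvol= option out of each btrfs entry:

```shell
# Sample fstab for the demo; device and subvolume names are fabricated.
cat > /tmp/fstab.sample <<'EOF'
UUID=1234-ABCD /     btrfs subvol=root00,compress=lzo 0 0
UUID=1234-ABCD /var  btrfs subvol=var                 0 0
UUID=1234-ABCD /home btrfs subvol=home                0 0
EOF

# fstab fields: device, mount point, fstype, options, dump, pass.
# For each btrfs line, split the options on commas and print the
# mount point alongside its subvol= option.
awk '$3 == "btrfs" {
    n = split($4, opts, ",")
    for (i = 1; i <= n; i++)
        if (opts[i] ~ /^subvol=/) print $2, opts[i]
}' /tmp/fstab.sample
```

As the thread notes, subvol=var and subvol=/var are treated the same by the mount code, so a cross-referencing tool has to normalize that leading slash when comparing fstab against /etc/mtab.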