Dave posted on Wed, 20 Sep 2017 02:38:13 -0400 as excerpted:

> Here's my scenario. Some months ago I built an over-the-top powerful
> desktop computer / workstation and I was looking forward to really
> fantastic performance improvements over my 6 year old Ubuntu machine. I
> installed Arch Linux on BTRFS on the new computer (on an SSD). To my
> shock, it was no faster than my old machine. I focused a lot on Firefox
> performance because I use Firefox a lot, and that was one of the
> applications in which I was most looking forward to better performance.
>
> I tried everything I could think of and everything recommended to me in
> various forums (except switching to Windows), and the performance
> remained very disappointing.
>
> Then today I read the following:
>
> Gotchas - btrfs Wiki
> https://btrfs.wiki.kernel.org/index.php/Gotchas
>
>     Fragmentation: Files with a lot of random writes can become
>     heavily fragmented (10000+ extents), causing excessive multi-second
>     spikes of CPU load on systems with an SSD or large amount of RAM.
>     On desktops this primarily affects application databases (including
>     Firefox). Workarounds include manually defragmenting your home
>     directory using btrfs fi defragment. Auto-defragment (mount option
>     autodefrag) should solve this problem.
>
> Upon reading that, I am wondering if fragmentation in the Firefox
> profile is part of my issue. That's one thing I never tested
> previously. (BTW, this system has 256 GB of RAM and 20 cores.)
>
> Furthermore, the same BTRFS wiki page mentions the performance
> penalties of many snapshots. I am keeping 30 to 50 snapshots of the
> volume that contains the Firefox profile.
>
> Would these two things be enough to turn top-of-the-line hardware into
> a mediocre-performing desktop system? (The system performs fine on
> benchmarks -- it's real-life usage, particularly with Firefox, where it
> is disappointing.)
>
> After reading the info here, I am wondering if I should make a new
> subvolume just for my Firefox profile(s), not use COW and/or not keep
> snapshots on it, and mount it with the autodefrag option.
>
> As part of this strategy, I could send snapshots to another disk using
> btrfs send-receive. That way I would have the benefits of snapshots
> (which are important to me), but by not keeping any snapshots on the
> live subvolume I could avoid the performance problems.
>
> What would you guys do in this situation?
[FWIW this is my second try at a reply, my first being way too detailed
and going off into the weeds somewhere, so I killed it.]

That's an interesting scenario indeed, and perhaps I can help, since my
config isn't nearly as high-end as yours, but I run firefox on btrfs on
ssds and have no performance complaints. The difference is very likely
due to one or more of the following (FWIW I'd suggest a 4-3-1-2 order,
tho only 1 and 2 are really btrfs-related):

1) I make sure I consistently mount with autodefrag, starting with the
very first mount after the filesystem is created (the one used to
initially populate it). That way the filesystem never gets fragmented
badly enough to force writes into highly fragmented free space in the
first place. (With the behavior of the ssd mount option currently under
discussion and likely to change, it's possible I'll see more
fragmentation in the future, once ssd no longer tries so hard to find
reasonably large free-space chunks to write into, but it has been fine
so far.)

2) Subvolumes and snapshots seemed to me more trouble than they were
worth, particularly since they all live on the same filesystem anyway,
and if it's damaged, it'll take all the subvolumes and snapshots with
it. So I don't use them, preferring instead real partitioning and more,
smaller, fully separate filesystems, some of which aren't mounted by
default (and root is mounted read-only by default), so there's little
chance they'll be damaged in a crash or by a filesystem bug.

And if there /is/ any damage, it's much more limited in scope, since
all my data eggs aren't in the same basket, and maintenance such as
btrfs check and scrub takes far less time (and check far less memory)
than it would were it one big pool with snapshots. And if recovery
fails too, the backups are likewise small filesystems the same size as
the working copies, so copying the data back over takes far less time
as well (not to mention that making the backups takes less time in the
first place, so it's easier to update them regularly).
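For reference, the autodefrag setup from point 1 plus the wiki's manual
defragment workaround look roughly like this. The device, mountpoint,
and profile path here are illustrative examples, not anyone's actual
layout:

```shell
# /etc/fstab: mount with autodefrag from the very first mount onward
# (device and mountpoint are example values)
# /dev/sda2  /home  btrfs  defaults,noatime,autodefrag  0 0

# or remount a live filesystem; this only affects writes from now on,
# it does not fix existing fragmentation
mount -o remount,autodefrag /home

# one-time manual defragment of an already-fragmented profile, per the
# wiki's workaround; -r recurses, -v is verbose
btrfs filesystem defragment -r -v ~/.mozilla/firefox
```

One caveat worth knowing: since snapshot-aware defrag was disabled,
defragmenting files that are shared with snapshots breaks the reflinks
and duplicates the data, so with Dave's 30-50 snapshots a manual
defragment could noticeably increase space usage.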
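As for Dave's send/receive idea from the quoted message, a minimal
sketch might look like the following, assuming a dedicated subvolume at
/home/firefox and a backup btrfs filesystem mounted at /mnt/backup
(both paths hypothetical):

```shell
# take a read-only snapshot (send requires read-only), ship it to the
# backup disk, then delete the local copy so no snapshots linger on the
# live filesystem
snap=/home/.snapshots/firefox-$(date +%F)
btrfs subvolume snapshot -r /home/firefox "$snap"
btrfs send "$snap" | btrfs receive /mnt/backup
btrfs subvolume delete "$snap"
```

Note the tension in this plan: incremental sends (btrfs send -p
<parent> "$snap") need the parent snapshot kept on the live filesystem,
so deleting every local snapshot means each send is a full copy.
Keeping just the single most recent snapshot locally is a common
compromise.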
3) Austin mentioned the firefox cache. I honestly wouldn't know about
it, since I have firefox configured to use a tmpfs for its cache, so it
operates at memory speed and gets cleared along with the rest of memory
at every reboot or tmpfs umount. My inet speed is fast enough I don't
really need the cache anyway, but it's nice to have it, operating at
memory speed, within a single boot session... and to have it cleared on
reboot.

4) This one was the biggest one for me for awhile. Is firefox running
in multi-process mode? If you don't know, go to about:support and look
in the Application Basics section at the Multiprocess Windows and Web
Content Processes entries. With multiple windows open it should show
something like 2/2 for windows (for two windows open, tho you won't get
20/20 for 20 windows open), and n/7 for content processes (tho I
believe the default is 4 instead of 7; I've upped mine), with n going
up toward 7 (or 4) if you have multiple tabs/windows open playing video
or the like.

If you're stuck at a single process, that'll be a *BIG* drag on
performance, particularly when playing youtube full-screen or the like.
There are various reasons you might get stuck at a single process,
including extensions that aren't compatible with "electrolysis" (aka
e10s, the mozilla code name for multi-process firefox). The one that
was my problem, after I ensured all my extensions were e10s-compatible:
I was trying to run the upstream firefox binary, which is now
pulseaudio-only (no more direct alsa support), with apulse as a
pulseaudio substitute, and apulse is apparently single-process-only
(forcing multi-process would crash the tabs as soon as I tried
navigating away from about:whatever to anything remote). Once I figured
that out, I switched back to the gentoo firefox ebuild with the alsa
USE flag enabled instead of pulseaudio. That got multiprocess working,
and it was *MUCH* more responsive, as I figured it should be!
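If anyone wants to replicate the tmpfs-cache setup from point 3, a
minimal sketch looks like this. The mountpoint, size, and username are
example values, not my actual config:

```shell
# /etc/fstab: a small dedicated tmpfs for the browser cache
# (size=1g is an arbitrary example)
# tmpfs  /home/dave/.cache/firefox-tmpfs  tmpfs  noatime,size=1g  0 0

# then point firefox's disk cache at it, via about:config or user.js:
# browser.cache.disk.parent_directory = /home/dave/.cache/firefox-tmpfs
```

Since tmpfs contents vanish at umount/reboot, this also gives the
clear-on-reboot behavior for free, with no cleanup scripts needed.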
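And for point 4, beyond checking about:support, the relevant prefs can
be pinned in the profile's user.js. The profile directory name below is
a placeholder (yours will be something like xxxxxxxx.default), and the
process count of 4 is just the default mentioned above:

```shell
# append e10s-related prefs to the profile's user.js
# (replace XXXXXXXX.default with your actual profile directory)
cat >> ~/.mozilla/firefox/XXXXXXXX.default/user.js <<'EOF'
// opt in to multi-process (e10s) mode
user_pref("browser.tabs.remote.autostart", true);
// max web content processes (firefox's default is 4)
user_pref("dom.ipc.processCount", 4);
EOF
```

Note that firefox can still silently fall back to single-process if
something incompatible (an extension, or in my case apulse) blocks
e10s, which is exactly why checking about:support with two windows open
is the reliable test.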
=:^)

If you find you're stuck at single process (remember, check with at
least two windows open) and need help with it, yell. Because it'll make
a *HUGE* difference.

-- 
Duncan - List replies preferred.   No HTML msgs.
"Every nonfree program has a lord, a master --
and if you use the program, he is your master."  Richard Stallman
--
To unsubscribe from this list: send the line "unsubscribe linux-btrfs" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html