Re: What is best practice when partitioning a device that holds one or more btr-filesystems
On Wed, Dec 14, 2011 at 1:42 PM, Wilfred van Velzen wrote:
> On Wed, Dec 14, 2011 at 9:56 PM, Gareth Pye wrote:
>> On Thu, Dec 15, 2011 at 5:51 AM, Wilfred van Velzen wrote:
>>>
>>> (I'm not interested in what early adopter users do when they are
>>> using rc kernels...)
>>
>> Yet you're going to use a FS without a working fsck? That puts you
>> in early adopter territory to me.
>
> Yeah, maybe. But I'm still not interested in it regarding
> partitioning! ;)
>
> But actually I decided not to use it for the production environment.
> The missing working fsck is one of the reasons.
> Although openSUSE supports it, SUSE Linux Enterprise Server 11 is
> going to support it with their next SP release in February, and
> Fedora might use it as default in their next release...

Did I miss any? MeeGo has been using btrfs by default right from the
start. In the current versions, we even install everything in a single
btrfs partition, use two subvolumes (/home and /), and create a
factory-reset snapshot of the / filesystem at installation.

Auke
Re: (renamed thread) btrfs metrics
On Wed, Jan 4, 2012 at 3:48 AM, Daniel Pocock wrote:
>
>>> I am looking at what metrics are needed to monitor btrfs in
>>> production. I actually look after the ganglia-modules-linux package,
>>> which includes some FS space metrics, but I figured that btrfs
>>> throws all that out the window.
>>>
>>> Can you suggest metrics that would be meaningful, do I look in /proc
>>> or with syscalls, is there any code I should look at for an example
>>> of how to extract them with C? Ideally, Ganglia runs without root
>>> privileges too, so please let me know if btrfs will allow me to
>>> access them
>>
>> It depends on what you want to know, really. If you want "how close
>> am I to a full filesystem?", then the output of df will give you a
>> measure, even if it could be up to a factor of 2 out -- you can use
>> it for predictive planning, though, as it'll be near zero when the
>> FS runs out of space.
>
> Maybe if you look at it from the point of view of the sysadmin and
> think about what questions he might want to ask:
>
> a) how much space would I reclaim if I deleted snapshot X?
>
> b) how much space would I reclaim if I deleted all snapshots?
>
> c) how much space would I need if I start making 4 snapshots a day
> and keeping them for 48 hours?

Chiming in on the discussion - what I'd personally like to see:

First, and probably easiest: display per subvolume the space used that
is "unique" (not used by any other subvolume), and the space that is
shared (the opposite - all blocks that appear in other subvolumes as
well).

From there on, one could potentially create a matrix (proportional
font art, apologies):

          | subvol1 | subvol2 | subvol3 |
 ---------+---------+---------+---------+
  subvol1 |   200M  |    20M  |    50M  |
 ---------+---------+---------+---------+
  subvol2 |    20M  |   350M  |    22M  |
 ---------+---------+---------+---------+
  subvol3 |    50M  |    22M  |   634M  |
 ---------+---------+---------+---------+

The diagonal obviously shows the "unique" blocks; subvol1 and subvol2
share 20M of data, etc. Missing from this plot is "how much is shared
between subvol1, subvol2, and subvol3 together", but it's a start and
not that hard to understand. One might add a column for the "total
size" of each subvolume, which obviously need not be the sum of the
other columns in this diagram.

Anyway, something like this would be high on my list of `df` numbers
I'd like to see - since I think they are useful numbers.

Cheers,

Auke
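To make the "without root privileges" part of the question concrete:
the plain df-style numbers (the ones with the factor-of-2 caveat
above) are available to ordinary user code through the portable
statvfs(3) call - no btrfs-specific interface or privileges needed.
A minimal sketch, with the mount point taken from the command line
and error handling kept short:

  #include <stdio.h>
  #include <sys/statvfs.h>

  /*
   * Minimal sketch: read df-style capacity numbers for a mount point
   * from unprivileged code via statvfs(3). On btrfs these numbers
   * carry the factor-of-2 caveat discussed above, but they still
   * trend to zero as the filesystem fills, so they work for trending.
   */
  int main(int argc, char **argv)
  {
          const char *path = argc > 1 ? argv[1] : "/";
          struct statvfs vfs;

          if (statvfs(path, &vfs) != 0) {
                  perror("statvfs");
                  return 1;
          }

          unsigned long long total =
                  (unsigned long long)vfs.f_blocks * vfs.f_frsize;
          unsigned long long avail =
                  (unsigned long long)vfs.f_bavail * vfs.f_frsize;

          printf("%s: total %llu bytes, available %llu bytes\n",
                 path, total, avail);
          return 0;
  }

The per-subvolume unique/shared numbers have no comparable interface,
which is exactly the gap this thread is about.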
Re: BTRFS development
On Sun, Jan 8, 2012 at 8:08 AM, debit2...@gmail.com wrote:
> Hi everyone on this list,
>
> I am very new to this mailing list. I want to get involved in btrfs
> file system development.
> But I could not figure out the starting point from which to start
> looking into the code.
>
> Is it right to start from the mkfs.btrfs source code, specifically
> from mkfs.c?

Perhaps, but the obvious place is fs/btrfs/ in the Linux kernel
sources.

Auke
Re: [RFB] add LZ4 compression method to btrfs
On Tue, Feb 14, 2012 at 1:47 PM, Hugo Chevrain wrote:
>>
>> Are you sure about these figures? The difference seems too large.
>> It's almost unbelievable.
>>
>
> You should not,
> Mark Ruijter found the same for LessFS
> (http://www.lessfs.com/wordpress/?p=688) and there is also such a
> finding in a Hadoop thread
> (https://scribe.twitter.com/#!/otisg/status/148848850914902016)

The first link only shows results, not data. The second link is dead -
it just shows an empty Twitter page, in two browsers.

Science isn't hard, folks! Just post the raw numbers so people can
verify the results.

Cheers,

Auke
btrfs-progs: plea for a new release tarball
Chris,

I'm one of those few people with several hats on, and one of them is
packager/distro builder. From that perspective, btrfs-progs is rather
awkward to work with at this time, because as packagers we like to
work with tarballs, not git.

Currently, the latest tarball out there is 0.19, and it doesn't even
compile with gcc-4.6, due to -Werror=unused-but-set-variable being the
default in that version. On top of that, 0.19 doesn't contain the
'btrfs' tool. It's also rather old already: the old kernel.org mirror
that carries it has a 2009 date stamp.

Now, I know I can get it out of git, but I don't want to. There is not
a single (realistic, anyway) distro that pulls all its sources from
git.

So, kindly, I ask you to do a btrfs-progs-0.20 release that includes
the btrfs tool, and post it back at kernel.org/pub/linux/people/mason?

Pleeeaaase? I'd be ecstatic.

Thanks :)

Auke
Re: [systemd-devel] systemd-udevd: excessive I/O usage
On Mon, Jun 4, 2012 at 8:50 PM, Alexander E. Patrakov wrote:
> 2012/6/5 Kok, Auke-jan H wrote on the systemd-devel list:
>> It seems your system is taking well over 15 seconds before btrfs is
>> actually *ready*, which seems to be the main hiccup (note:
>> speculation here). I've personally become a bit displeased with
>> btrfs performance recently myself, so I'm wondering if you should
>> try ext4 for now.
>>
>> Other than that, after btrfs/udev finally pops to life, things seem
>> to start relatively quickly.
>
> I think btrfs is to blame here, because I think my system started to
> be affected by this problem after the ext4 -> btrfs conversion.
>
> I recently changed my ext4-on-LVM Gentoo system at home by
> defragmenting the LVM (http://bisqwit.iki.fi/source/lvm2defrag.html),
> converting the biggest logical volume (200 GB, 80% used, mostly video
> files, git trees and SVN checkouts of various projects) to btrfs,
> making a backup of the metadata, deleting the LVM partition, and
> creating ordinary partitions, according to the backup, in the places
> that were occupied by LVM volumes before. It worked. Then I made a
> btrfs subvolume and transferred the contents of the former root
> partition there using tar. So now I have two copies of my root
> filesystem - one on ext4 and one on btrfs. I recreated an initramfs
> for each of them using dracut.
> Result: boot from ext4 takes less than 15 seconds, while boot from
> btrfs takes 9 minutes (or 5 minutes if I disable readahead - the
> data file is not valid on btrfs anyway).
>
> One problem is btrfsck in the dracut-created initramfs - it fires
> every time (with btrfs mounted read-only?). The other problem is the
> btrfs-cache-1 kernel thread - I was told on #btrfs that it is a
> one-time thing, but apparently it wants to do its caching every boot
> due to some breakage. During boot, there are also warnings about
> hung tasks with some locks held.
>
> I am attaching a dmesg file illustrating all of the problems
> mentioned above.

I've had the same (bad) experiences since 3.3.

The "one-time thing" is creating the free space cache. On my home
systems, with 3.4.x, it's still creating it on *every* boot, which
certainly accounts for the I/O load - and on sluggish spinning rust
that is disastrous, to say the least.

Hung tasks in btrfs have been present since it was merged. Remember
that they're only a warning - eventually they almost always unhang.
But it's still very frustrating.

I'm currently dropping btrfs from my home development system because
I've spent too much time in the last month trying to get my btrfs
volumes back up after a kernel upgrade.

So, I feel your pain.

Auke

> --
> Alexander E. Patrakov
extremely slow syncing on btrfs with 2.6.39.1
I've been monitoring the lists for a while now but didn't see this
problem mentioned in particular:

I've got a fairly standard desktop system at home: a 700 GB WD drive,
nothing special, with 2 btrfs filesystems and some snapshots. The
system runs for days, and I noticed unusual disk activity the other
evening - it turns out that it's taking forever to sync().

$ uname -r
2.6.39.1

$ grep btrfs /proc/mounts
/dev/root / btrfs rw,relatime 0 0      # is /dev/sdb2
/dev/sdb5 /home btrfs rw,relatime 0 0

$ time sync
real    1m5.552s
user    0m0.000s
sys     0m2.102s

$ time sync
real    1m16.830s
user    0m0.001s
sys     0m1.490s

$ df -h / /home
Filesystem            Size  Used Avail Use% Mounted on
/dev/root              47G   33G  7.7G  82% /
/dev/sdb5             652G  216G  421G  34% /home

$ btrfs fi df /
Data: total=35.48GB, used=29.86GB
System, DUP: total=16.00MB, used=12.00KB
System: total=4.00MB, used=0.00
Metadata, DUP: total=4.50GB, used=1.67GB

$ btrfs fi df /home
Data: total=310.01GB, used=209.53GB
System, DUP: total=8.00MB, used=48.00KB
System: total=4.00MB, used=0.00
Metadata, DUP: total=11.00GB, used=2.98GB
Metadata: total=8.00MB, used=0.00

I'll switch to 3.0 soon, but, given that we're probably going to be
running MeeGo on 2.6.39 for a while, I was wondering if anyone knows
off the top of their heads whether this issue is known/identified.
If not, I'll need to make someone do some patching ;).

Auke
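For anyone wanting to reproduce the measurement outside the shell,
here is a tiny standalone timer around a blocking sync() - nothing
btrfs-specific, just POSIX calls. On Linux, sync() waits for the
writeback to complete before returning, so the wall time is
meaningful:

  #include <stdio.h>
  #include <time.h>
  #include <unistd.h>

  /*
   * Minimal sketch: time a blocking sync(), the same measurement as
   * `time sync` above but without shell/exec overhead.
   */
  int main(void)
  {
          struct timespec start, end;

          clock_gettime(CLOCK_MONOTONIC, &start);
          sync();
          clock_gettime(CLOCK_MONOTONIC, &end);

          double elapsed = (end.tv_sec - start.tv_sec) +
                           (end.tv_nsec - start.tv_nsec) / 1e9;
          printf("sync() took %.3f seconds\n", elapsed);
          return 0;
  }

(On older glibc, link with -lrt for clock_gettime.)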
Re: BTRFS Duplicated UUID on /home
On Mon, Jul 25, 2011 at 5:10 AM, Bryce Myers wrote:
> I have 4 partitions on my hard drive:
> 1 = /boot on ext2
> 2 = swap
> 3 = / on btrfs
> 4 = /home on btrfs
>
> On my last boot, the UUID of partition 3 was cloned to partition 4,
> so when I try to mount either 3 or 4, they both mount the /
> partition.
>
> We tried 7182011 git, and the current version in ARCH 1062010, and
> neither had an option for resetting the UUID that we could find.
> Both partitions were btrfsck'd and returned no errors.

Resetting the UUID on btrfs isn't a quick-and-easy thing - you have to
walk the entire tree and change every object. We've got a bad hack in
MeeGo that uses btrfs-debug-tree and changes the UUID while it walks
the entire tree, but it's ugly as hell.

You really shouldn't clone btrfs filesystems; just make a new
filesystem instead.

Auke
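To illustrate why "change every object" makes this expensive: every
btrfs metadata block embeds the filesystem UUID in its header and is
covered by a checksum, so a UUID change has to rewrite and re-checksum
all of them. The following is a deliberately toy model - the struct
and checksum are invented, not btrfs's real on-disk format (which uses
crc32c and a much richer header) - just to show the shape of the walk:

  #include <stdint.h>
  #include <stdio.h>
  #include <string.h>

  #define FSID_LEN 16

  /* Toy stand-in for a btrfs metadata block header + contents. */
  struct toy_block {
          uint32_t csum;           /* checksum over the rest of the block */
          uint8_t  fsid[FSID_LEN]; /* filesystem UUID, embedded per block */
          uint8_t  payload[64];    /* stand-in for keys/items/pointers */
  };

  static uint32_t toy_csum(const struct toy_block *b)
  {
          /* stand-in checksum; real btrfs uses crc32c */
          const uint8_t *p = (const uint8_t *)b + sizeof(b->csum);
          size_t n = sizeof(*b) - sizeof(b->csum);
          uint32_t c = 0;

          while (n--)
                  c = c * 31 + *p++;
          return c;
  }

  static void change_fsid(struct toy_block *blocks, size_t nblocks,
                          const uint8_t new_fsid[FSID_LEN])
  {
          /* the point: EVERY metadata block must be rewritten */
          for (size_t i = 0; i < nblocks; i++) {
                  memcpy(blocks[i].fsid, new_fsid, FSID_LEN);
                  blocks[i].csum = toy_csum(&blocks[i]);
          }
  }

  int main(void)
  {
          struct toy_block tree[4];
          uint8_t new_fsid[FSID_LEN] = { 0xde, 0xad, 0xbe, 0xef };

          memset(tree, 0, sizeof(tree));
          change_fsid(tree, 4, new_fsid);
          printf("rewrote %zu blocks\n", sizeof(tree) / sizeof(tree[0]));
          return 0;
  }

On a real filesystem that walk touches every metadata block on disk,
which is why there was no quick option for it in the tools at the
time.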