Re: efficiency of btrfs cow

2011-03-23 Thread Kolja Dummann
> So it's clear that total usage (as reported by df) was 121,402,328KB but
> Metadata has two values:
>
> Metadata: total=5.01GB, used=3.26GB
>
> What's the difference between total and used? And for that matter,
> what's the difference between the total and used for Data
> (total=110.01GB, used=1
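For readers skimming the archive: in "btrfs filesystem df" output, total is the space btrfs has allocated to chunks (block groups) of that type, while used is how much of that allocation currently holds live data. A minimal invocation, reusing only the figure quoted above (the mount point is an assumption):

# btrfs filesystem df /backup
Metadata: total=5.01GB, used=3.26GB

That is, 5.01GB worth of metadata chunks have been carved out of the device, of which 3.26GB actually holds metadata.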

Re: efficiency of btrfs cow

2011-03-23 Thread Brian J. Murrell
On 11-03-23 11:53 AM, Chester wrote:
> I'm not a developer, but I think it goes something like this:
> btrfs doesn't write the filesystem on the entire device/partition at
> format time, rather, it dynamically increases the size of the
> filesystem as data is used. That's why formatting a disk in bt

Re: efficiency of btrfs cow

2011-03-23 Thread Chester
I'm not a developer, but I think it goes something like this: btrfs doesn't write the filesystem on the entire device/partition at format time; rather, it dynamically increases the size of the filesystem as data is used. That's why formatting a disk in btrfs can be so fast.

On Wed, Mar 23, 2011 at
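A quick way to see the point being made (the device name here is an assumption; make sure it holds no data you care about):

# time mkfs.btrfs /dev/sdb1

Because mkfs.btrfs only writes the superblocks, initial trees and a few starting chunks, rather than initializing structures across the whole device, it returns almost immediately even on a large disk.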

Re: efficiency of btrfs cow

2011-03-23 Thread Brian J. Murrell
On 11-03-06 11:06 AM, Calvin Walton wrote:
>
> To see exactly what's going on, you should use the "btrfs filesystem df"
> command to see how space is being allocated for data and metadata
> separately:

OK. So with an empty filesystem, before my first copy (i.e. the base on which the next copy wi
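For context, this is roughly what the command shows against a freshly formatted filesystem; the mount point and the exact figures are illustrative, not taken from the thread:

# btrfs filesystem df /mnt/backup
Data: total=8.00MB, used=0.00
System: total=12.00MB, used=4.00KB
Metadata: total=1.01GB, used=24.00KB

Each line shows how much space has been reserved for that block group type (total) versus how much of the reservation is actually occupied (used).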

Re: efficiency of btrfs cow

2011-03-06 Thread Freddie Cash
On Sun, Mar 6, 2011 at 8:02 AM, Fajar A. Nugraha wrote:
> On Sun, Mar 6, 2011 at 10:46 PM, Brian J. Murrell wrote:
>> # cp -al /backup/previous-backup/ /backup/current-backup
>> # rsync -aAHX ... --exclude /backup / /backup/current-backup
>>
>> The shortcoming of this of course is that it just

Re: efficiency of btrfs cow

2011-03-06 Thread Brian J. Murrell
On 11-03-06 11:02 AM, Fajar A. Nugraha wrote:
>
> If you have snapshots anyway, why not:
> - create a snapshot before each backup run
> - use the same directory (e.g. just /backup), no need to "cp" anything
> - add "--inplace" to rsync

Which is exactly what I am doing. There is no "cp" involved
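Spelled out, the workflow the two of them are converging on looks something like this; a minimal sketch, assuming /backup is a btrfs subvolume, with the /backup-snapshots location and naming as illustrative choices:

# btrfs subvolume snapshot /backup /backup-snapshots/$(date +%Y%m%d)
# rsync -aAHX --inplace --exclude /backup --exclude /backup-snapshots / /backup

Thanks to copy-on-write, every block rsync leaves untouched stays shared between /backup and all of the snapshots; only the blocks it actually rewrites consume new space.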

Re: efficiency of btrfs cow

2011-03-06 Thread Brian J. Murrell
On 11-03-06 11:17 AM, Calvin Walton wrote:
>
> To add a bit to this: if you *do not* use the --inplace option on rsync,
> rsync will rewrite the entire file, instead of updating the existing
> file!

Of course. As I mentioned to Fajar previously, I am indeed using --inplace when copying from the
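The distinction matters because of how rsync updates a destination file (paths here are illustrative). By default, rsync builds a temporary copy of the whole file and renames it over the destination, so every extent is new and nothing stays shared with earlier snapshots:

# rsync -aAHX /src/bigfile /backup/

With --inplace, changed blocks are written directly into the existing file, so unchanged extents remain shared with the snapshots:

# rsync -aAHX --inplace /src/bigfile /backup/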

Re: efficiency of btrfs cow

2011-03-06 Thread Calvin Walton
On Sun, 2011-03-06 at 23:02 +0700, Fajar A. Nugraha wrote:
> On Sun, Mar 6, 2011 at 10:46 PM, Brian J. Murrell wrote:
> > # cp -al /backup/previous-backup/ /backup/current-backup
> > # rsync -aAHX ... --exclude /backup / /backup/current-backup
> >
> > The shortcoming of this of course is that i

Re: efficiency of btrfs cow

2011-03-06 Thread Brian J. Murrell
On 11-03-06 11:06 AM, Calvin Walton wrote:
>
> There actually is such a periodic jump in overhead,

Ahh. So my instincts were correct.

> caused by the way
> which btrfs dynamically allocates space for metadata as needed by the
> creation of new files, which it does whenever the free metadata spa
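The jump being described is easy to provoke and watch; a sketch, with the mount point and file count as assumptions:

# btrfs filesystem df /mnt/backup
# mkdir /mnt/backup/many
# for i in $(seq 1 100000); do echo x > /mnt/backup/many/f$i; done
# btrfs filesystem df /mnt/backup

The Metadata total stays flat while free metadata space is consumed, then grows by a whole chunk at once (commonly 256MB) when btrfs allocates the next metadata block group.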

Re: efficiency of btrfs cow

2011-03-06 Thread Calvin Walton
On Sun, 2011-03-06 at 10:46 -0500, Brian J. Murrell wrote:
> I have a backup volume on an ext4 filesystem that is using rsync and
> its --link-dest option to create "hard-linked incremental" backups. I
> am sure everyone here is familiar with the technique but in case anyone
> isn't basically it'

Re: efficiency of btrfs cow

2011-03-06 Thread Fajar A. Nugraha
On Sun, Mar 6, 2011 at 10:46 PM, Brian J. Murrell wrote:
> # cp -al /backup/previous-backup/ /backup/current-backup
> # rsync -aAHX ... --exclude /backup / /backup/current-backup
>
> The shortcoming of this of course is that it just takes 1 byte in a
> (possibly huge) file to require that the whol

efficiency of btrfs cow

2011-03-06 Thread Brian J. Murrell
I have a backup volume on an ext4 filesystem that is using rsync and its --link-dest option to create "hard-linked incremental" backups. I am sure everyone here is familiar with the technique but in case anyone isn't, basically it's effectively doing (each backup):

# cp -al /backup/previous-backu
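For anyone who doesn't know the technique, a sketch of both variants; the paths mirror the fragments quoted elsewhere in the thread, and the omitted rsync options are left elided as in the original:

# cp -al /backup/previous-backup/ /backup/current-backup
# rsync -aAHX ... --exclude /backup / /backup/current-backup

or equivalently, letting rsync do the hard-linking itself:

# rsync -aAHX ... --link-dest=/backup/previous-backup --exclude /backup / /backup/current-backup

Unchanged files end up as hard links back to the previous backup and cost no extra space; any file that changed at all is stored again, in full.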