Re: Is metadata redundant over more than one drive with raid0 too?

2014-05-04 Thread Daniel Lee
On 05/04/2014 12:24 AM, Marc MERLIN wrote:
  
 Gotcha, thanks for confirming, so -m raid1 -d raid0 really only protects
 against metadata corruption or a single block loss, but otherwise if you
 lost a drive in a 2 drive raid0, you'll have lost more than just half
 your files.

 The scenario you mentioned at the beginning -- "if I lose a drive,
 I'll still have full metadata for the entire filesystem and only be
 missing files" -- is more applicable to using -m raid1 -d single.
 Single is not geared towards performance and, though it doesn't
 guarantee a file is only on a single disk, the allocation does mean
 that the majority of all files smaller than a chunk will be stored
 on only one disk or the other - not both.
 Ok, so in other words:
 -d raid0: if you lose 1 drive out of 2, you may end up with a few small
 files intact and the rest will be lost

 -d single: you're more likely to have files be on one drive or the
 other, although there is no guarantee there either.

 Correct?

 Thanks,
 Marc
This often seems to confuse people; I think there is a common
misconception that the btrfs raid/single/dup features work at the file
level, when in reality they operate at a level closer to lvm/md.

If someone told you that they lost a device out of a JBOD or multi-disk
lvm group (somewhat analogous to -d single) with ext on top, you would
expect them to lose data in any file that had a fragment in the lost
region (let's ignore metadata for a moment). That is potentially up to
100% of the files, but it should not be a surprising result. Similarly,
someone who has lost a disk out of an md/lvm raid0 volume should not be
surprised to have a hard time recovering any data at all from it.
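
A minimal sketch of how those profiles are chosen at mkfs time and how
to see where the allocator actually put things; the device names are
hypothetical and mkfs is destructive:

  # Metadata mirrored across both disks, data striped (raid0) or unstriped (single):
  mkfs.btrfs -m raid1 -d raid0 /dev/sdX /dev/sdY
  mkfs.btrfs -m raid1 -d single /dev/sdX /dev/sdY

  mount /dev/sdX /mnt
  btrfs filesystem df /mnt   # reports the profile of each allocation (Data, Metadata, System)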



Re: Which companies are using Btrfs in production?

2014-04-24 Thread Daniel Lee
On 04/23/2014 06:19 PM, Marc MERLIN wrote:
 Oh while we're at it, are there companies that can say they are using btrfs
 in production?

 Marc
Netgear uses BTRFS as the filesystem in their refreshed ReadyNAS line.
They apparently use Oracle's Linux distro, so I assume they're relying
on Oracle to do most of the heavy lifting as far as supporting BTRFS and
backporting goes, since they're still on 3.0! They also offer raid5/6
support, so they are probably running BTRFS on top of md.

http://www.netgear.com/images/BTRFS%20on%20ReadyNAS%20OS%206_9May1318-76105.pdf



Re: Recovering from hard disk failure in a pool

2014-02-14 Thread Daniel Lee
On 02/14/2014 03:04 AM, Axelle wrote:
 Hi Hugo,

 Thanks for your answer.
 Unfortunately, I had also tried

 sudo mount -o degraded /dev/sdc1 /samples
 mount: wrong fs type, bad option, bad superblock on /dev/sdc1,
missing codepage or helper program, or other error
In some cases useful info is found in syslog - try
dmesg | tail  or so

 and dmesg says:
 [ 1177.695773] btrfs: open_ctree failed
 [ 1247.448766] device fsid 545e95c6-d347-4a8c-8a49-38b9f9cb9add devid
 2 transid 31105 /dev/sdc1
 [ 1247.449700] device fsid 545e95c6-d347-4a8c-8a49-38b9f9cb9add devid
 1 transid 31105 /dev/sdc6
 [ 1247.458794] device fsid 545e95c6-d347-4a8c-8a49-38b9f9cb9add devid
 2 transid 31105 /dev/sdc1
 [ 1247.459601] device fsid 545e95c6-d347-4a8c-8a49-38b9f9cb9add devid
 1 transid 31105 /dev/sdc6
 [ 4013.363254] device fsid 545e95c6-d347-4a8c-8a49-38b9f9cb9add devid
 2 transid 31105 /dev/sdc1
 [ 4013.408280] btrfs: allowing degraded mounts
 [ 4013.555764] btrfs: bdev (null) errs: wr 0, rd 14, flush 0, corrupt 0, gen 0
 [ 4015.600424] Btrfs: too many missing devices, writeable mount is not allowed
 [ 4015.630841] btrfs: open_ctree failed
Did the crashed /dev/sdb have more than one partition in your raid1
filesystem?

 Yes, I know, I'll probably be losing a lot of data, but it's not too
 much my concern because I had a backup (sooo happy about that :D). If
 I can manage to recover a little more on the btrfs volume it's bonus,
 but in the event I do not, I'll be using my backup.

 So, how do I fix my volume? I guess there would be a solution apart
 from scratching/deleting everything and starting again...


 Regards,
 Axelle



 On Fri, Feb 14, 2014 at 11:58 AM, Hugo Mills h...@carfax.org.uk wrote:
 On Fri, Feb 14, 2014 at 11:35:56AM +0100, Axelle wrote:
 Hi,
 I've just encountered a hard disk crash in one of my btrfs pools.

 sudo btrfs filesystem show
 failed to open /dev/sr0: No medium found
 Label: none  uuid: 545e95c6-d347-4a8c-8a49-38b9f9cb9add
 Total devices 3 FS bytes used 112.70GB
 devid1 size 100.61GB used 89.26GB path /dev/sdc6
 devid2 size 93.13GB used 84.00GB path /dev/sdc1
 *** Some devices missing

 The device which is missing is /dev/sdb. I have replaced it with a new
 hard disk. How do I add it back to the volume and fix the device
 missing?
 The pool is expected to mount to /samples (it is not mounted yet).

 I tried this - which fails:
 sudo btrfs device add /dev/sdb /samples
 ERROR: error adding the device '/dev/sdb' - Inappropriate ioctl for device

 Why isn't this working?
Because it's not mounted. :)

 I also tried this:
 sudo mount -o recovery /dev/sdc1 /samples
 mount: wrong fs type, bad option, bad superblock on /dev/sdc1,
missing codepage or helper program, or other error
In some cases useful info is found in syslog - try
dmesg | tail  or so
 same with /dev/sdc6
Close, but what you want here is:

 mount -o degraded /dev/sdc1 /samples

 not recovery. That will tell the FS that there's a missing disk, and
 it should mount without complaining. If your data is not RAID-1 or
 RAID-10, then you will almost certainly have lost some data.

At that point, since you've removed the dead disk, you can do:

 btrfs device delete missing /samples

 which forcibly removes the record of the missing device.

Then you can add the new device:

 btrfs device add /dev/sdb /samples

And finally balance to repair the RAID:

 btrfs balance start /samples

It's worth noting that even if you have RAID-1 data and metadata,
 losing /dev/sdc in your current configuration is likely to cause
 severe data loss -- probably making the whole FS unrecoverable. This
 is because the FS sees /dev/sdc1 and /dev/sdc6 as independent devices,
 and will happily put both copies of a piece of RAID-1 data (or
 metadata) on /dev/sdc -- one on each of sdc1 and sdc6. I therefore
 wouldn't recommend running like that for very long.

Hugo.

 --
 === Hugo Mills: hugo@... carfax.org.uk | darksatanic.net | lug.org.uk ===
   PGP key: 65E74AC0 from wwwkeys.eu.pgp.net or http://www.carfax.org.uk
--- All hope abandon,  Ye who press Enter here. ---
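
For reference, a consolidated sketch of the recovery sequence Hugo
outlines above. It assumes the degraded mount comes up writable (as it
turned out later in this thread, raid0 data forces a read-only mount,
in which case this does not apply), and it does the add before the
delete so the filesystem never drops below the minimum device count for
its raid level:

  mount -o degraded /dev/sdc1 /samples    # mount with a device missing
  btrfs device add /dev/sdb /samples      # bring in the replacement disk
  btrfs device delete missing /samples    # drop the record of the dead device
  btrfs balance start /samples            # rewrite chunks across the new device layout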


Re: Recovering from hard disk failure in a pool

2014-02-14 Thread Daniel Lee
On 02/14/2014 07:22 AM, Axelle wrote:
 Did the crashed /dev/sdb have more than one partition in your raid1
 filesystem?
 No, only 1 - as far as I recall.

 -- Axelle.
What does:

btrfs filesystem df /samples

say now that you've mounted the fs readonly?
 On Fri, Feb 14, 2014 at 3:58 PM, Daniel Lee longinu...@gmail.com wrote:
 On 02/14/2014 03:04 AM, Axelle wrote:
 Hi Hugo,

 Thanks for your answer.
 Unfortunately, I had also tried

 sudo mount -o degraded /dev/sdc1 /samples
 mount: wrong fs type, bad option, bad superblock on /dev/sdc1,
missing codepage or helper program, or other error
In some cases useful info is found in syslog - try
dmesg | tail  or so

 and dmesg says:
 [ 1177.695773] btrfs: open_ctree failed
 [ 1247.448766] device fsid 545e95c6-d347-4a8c-8a49-38b9f9cb9add devid
 2 transid 31105 /dev/sdc1
 [ 1247.449700] device fsid 545e95c6-d347-4a8c-8a49-38b9f9cb9add devid
 1 transid 31105 /dev/sdc6
 [ 1247.458794] device fsid 545e95c6-d347-4a8c-8a49-38b9f9cb9add devid
 2 transid 31105 /dev/sdc1
 [ 1247.459601] device fsid 545e95c6-d347-4a8c-8a49-38b9f9cb9add devid
 1 transid 31105 /dev/sdc6
 [ 4013.363254] device fsid 545e95c6-d347-4a8c-8a49-38b9f9cb9add devid
 2 transid 31105 /dev/sdc1
 [ 4013.408280] btrfs: allowing degraded mounts
 [ 4013.555764] btrfs: bdev (null) errs: wr 0, rd 14, flush 0, corrupt 0, gen 0
 [ 4015.600424] Btrfs: too many missing devices, writeable mount is not allowed
 [ 4015.630841] btrfs: open_ctree failed
 Did the crashed /dev/sdb have more than one partition in your raid1
 filesystem?
 Yes, I know, I'll probably be losing a lot of data, but it's not too
 much my concern because I had a backup (sooo happy about that :D). If
 I can manage to recover a little more on the btrfs volume it's bonus,
 but in the event I do not, I'll be using my backup.

 So, how do I fix my volume? I guess there would be a solution apart
 from scratching/deleting everything and starting again...


 Regards,
 Axelle



 On Fri, Feb 14, 2014 at 11:58 AM, Hugo Mills h...@carfax.org.uk wrote:
 On Fri, Feb 14, 2014 at 11:35:56AM +0100, Axelle wrote:
 Hi,
 I've just encountered a hard disk crash in one of my btrfs pools.

 sudo btrfs filesystem show
 failed to open /dev/sr0: No medium found
 Label: none  uuid: 545e95c6-d347-4a8c-8a49-38b9f9cb9add
 Total devices 3 FS bytes used 112.70GB
 devid1 size 100.61GB used 89.26GB path /dev/sdc6
 devid2 size 93.13GB used 84.00GB path /dev/sdc1
 *** Some devices missing

 The device which is missing is /dev/sdb. I have replaced it with a new
 hard disk. How do I add it back to the volume and fix the device
 missing?
 The pool is expected to mount to /samples (it is not mounted yet).

 I tried this - which fails:
 sudo btrfs device add /dev/sdb /samples
 ERROR: error adding the device '/dev/sdb' - Inappropriate ioctl for device

 Why isn't this working?
Because it's not mounted. :)

 I also tried this:
 sudo mount -o recovery /dev/sdc1 /samples
 mount: wrong fs type, bad option, bad superblock on /dev/sdc1,
missing codepage or helper program, or other error
In some cases useful info is found in syslog - try
dmesg | tail  or so
 same with /dev/sdc6
Close, but what you want here is:

 mount -o degraded /dev/sdc1 /samples

 not recovery. That will tell the FS that there's a missing disk, and
 it should mount without complaining. If your data is not RAID-1 or
 RAID-10, then you will almost certainly have lost some data.

At that point, since you've removed the dead disk, you can do:

 btrfs device delete missing /samples

 which forcibly removes the record of the missing device.

Then you can add the new device:

 btrfs device add /dev/sdb /samples

And finally balance to repair the RAID:

 btrfs balance start /samples

It's worth noting that even if you have RAID-1 data and metadata,
 losing /dev/sdc in your current configuration is likely to cause
 severe data loss -- probably making the whole FS unrecoverable. This
 is because the FS sees /dev/sdc1 and /dev/sdc6 as independent devices,
 and will happily put both copies of a piece of RAID-1 data (or
 metadata) on /dev/sdc -- one on each of sdc1 and sdc6. I therefore
 wouldn't recommend running like that for very long.

Hugo.

 --
 === Hugo Mills: hugo@... carfax.org.uk | darksatanic.net | lug.org.uk ===
   PGP key: 65E74AC0 from wwwkeys.eu.pgp.net or http://www.carfax.org.uk
--- All hope abandon,  Ye who press Enter here. ---

Re: Recovering from hard disk failure in a pool

2014-02-14 Thread Daniel Lee
On 02/14/2014 09:53 AM, Axelle wrote:
 Hi Daniel,

 This is what it answers now:

 sudo btrfs filesystem df /samples
 [sudo] password for axelle:
 Data, RAID0: total=252.00GB, used=108.99GB
 System, RAID1: total=8.00MB, used=28.00KB
 System: total=4.00MB, used=0.00
 Metadata, RAID1: total=5.25GB, used=3.71GB
So the issue here is that your data is raid0, which will not tolerate
the loss of any device. I'd recommend trashing the current filesystem
and creating a new one with some redundancy (use raid1 rather than
raid0, don't add more than one partition from the same disk to a btrfs
filesystem, etc.) so you can recover from this sort of scenario in the
future. To do this, use wipefs on the remaining partitions to remove
all traces of the current btrfs filesystem.
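
A minimal sketch of that rebuild, using the device names from this
thread; wipefs and mkfs are destructive, so only run them once
everything worth keeping has been copied off:

  umount /samples                      # stop using the degraded filesystem
  wipefs -a /dev/sdc1 /dev/sdc6        # erase the old btrfs signatures
  # At most one device per physical disk, raid1 for both data and metadata:
  mkfs.btrfs -m raid1 -d raid1 /dev/sdb /dev/sdc1
  mount /dev/sdb /samples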

 By the way, I was happy to recover most of my data :)

This is the nice thing about the checksumming in btrfs: you know that
whatever data you did manage to read off is correct. :)
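
On the rebuilt (writable) filesystem, a periodic scrub makes that
guarantee explicit by re-reading every copy and checking it against its
checksum; a minimal sketch:

  btrfs scrub start /samples     # verify all data and metadata checksums in the background
  btrfs scrub status /samples    # report bytes scrubbed and any csum/unrecoverable errors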

 Of course, I still can't add my new /dev/sdb to /samples because it's 
 read-only:
 sudo btrfs device add /dev/sdb /samples
 ERROR: error adding the device '/dev/sdb' - Read-only file system

 Regards
 Axelle

 On Fri, Feb 14, 2014 at 5:19 PM, Daniel Lee longinu...@gmail.com wrote:
 On 02/14/2014 07:22 AM, Axelle wrote:
 Did the crashed /dev/sdb have more than one partition in your raid1
 filesystem?
 No, only 1 - as far as I recall.

 -- Axelle.
 What does:

 btrfs filesystem df /samples

 say now that you've mounted the fs readonly?
 On Fri, Feb 14, 2014 at 3:58 PM, Daniel Lee longinu...@gmail.com wrote:
 On 02/14/2014 03:04 AM, Axelle wrote:
 Hi Hugo,

 Thanks for your answer.
 Unfortunately, I had also tried

 sudo mount -o degraded /dev/sdc1 /samples
 mount: wrong fs type, bad option, bad superblock on /dev/sdc1,
missing codepage or helper program, or other error
In some cases useful info is found in syslog - try
dmesg | tail  or so

 and dmesg says:
 [ 1177.695773] btrfs: open_ctree failed
 [ 1247.448766] device fsid 545e95c6-d347-4a8c-8a49-38b9f9cb9add devid
 2 transid 31105 /dev/sdc1
 [ 1247.449700] device fsid 545e95c6-d347-4a8c-8a49-38b9f9cb9add devid
 1 transid 31105 /dev/sdc6
 [ 1247.458794] device fsid 545e95c6-d347-4a8c-8a49-38b9f9cb9add devid
 2 transid 31105 /dev/sdc1
 [ 1247.459601] device fsid 545e95c6-d347-4a8c-8a49-38b9f9cb9add devid
 1 transid 31105 /dev/sdc6
 [ 4013.363254] device fsid 545e95c6-d347-4a8c-8a49-38b9f9cb9add devid
 2 transid 31105 /dev/sdc1
 [ 4013.408280] btrfs: allowing degraded mounts
 [ 4013.555764] btrfs: bdev (null) errs: wr 0, rd 14, flush 0, corrupt 0, gen 0
 [ 4015.600424] Btrfs: too many missing devices, writeable mount is not allowed
 [ 4015.630841] btrfs: open_ctree failed
 Did the crashed /dev/sdb have more than one partition in your raid1
 filesystem?
 Yes, I know, I'll probably be losing a lot of data, but it's not too
 much my concern because I had a backup (sooo happy about that :D). If
 I can manage to recover a little more on the btrfs volume it's bonus,
 but in the event I do not, I'll be using my backup.

 So, how do I fix my volume? I guess there would be a solution apart
 from scratching/deleting everything and starting again...


 Regards,
 Axelle



 On Fri, Feb 14, 2014 at 11:58 AM, Hugo Mills h...@carfax.org.uk wrote:
 On Fri, Feb 14, 2014 at 11:35:56AM +0100, Axelle wrote:
 Hi,
 I've just encountered a hard disk crash in one of my btrfs pools.

 sudo btrfs filesystem show
 failed to open /dev/sr0: No medium found
 Label: none  uuid: 545e95c6-d347-4a8c-8a49-38b9f9cb9add
 Total devices 3 FS bytes used 112.70GB
 devid1 size 100.61GB used 89.26GB path /dev/sdc6
 devid2 size 93.13GB used 84.00GB path /dev/sdc1
 *** Some devices missing

 The device which is missing is /dev/sdb. I have replaced it with a new
 hard disk. How do I add it back to the volume and fix the device
 missing?
 The pool is expected to mount to /samples (it is not mounted yet).

 I tried this - which fails:
 sudo btrfs device add /dev/sdb /samples
 ERROR: error adding the device '/dev/sdb' - Inappropriate ioctl for device

 Why isn't this working?
Because it's not mounted. :)

 I also tried this:
 sudo mount -o recovery /dev/sdc1 /samples
 mount: wrong fs type, bad option, bad superblock on /dev/sdc1,
missing codepage or helper program, or other error
In some cases useful info is found in syslog - try
dmesg | tail  or so
 same with /dev/sdc6
Close, but what you want here is:

 mount -o degraded /dev/sdc1 /samples

 not recovery. That will tell the FS that there's a missing disk, and
 it should mount without complaining. If your data is not RAID-1 or
 RAID-10, then you will almost certainly have lost some data.

At that point, since you've removed the dead disk, you can do:

 btrfs device delete missing /samples

 which forcibly removes the record of the missing device.

Then you can add the new device:

 btrfs device add /dev/sdb /samples

And finally

Re: kernel 3.3.4 damages filesystem (?)

2012-05-07 Thread Daniel Lee

On 05/07/2012 10:52 AM, Helmut Hullen wrote:

Hello, Felix,

You wrote on 07.05.12:


I'm just going back to ext4 - then one broken disk doesn't disturb
the contents of the other disks.



?! If you use raid0, one broken disk will always disturb the contents
of the other disks; that is what raid0 does, no matter what
filesystem you use.


Yes - I know. But btrfs promises that I can add bigger disks and delete
smaller disks on the fly. For something like a video collection which
will grow on and on, that's an interesting feature. And such a (big)
collection does need a grandfather-father-son backup; it's not
critical data.

With a file system like ext2/3/4 I can work with several directories
which are mounted together, but (as said before) one broken disk doesn't
disturb the others.



How can you do that with ext2/3/4? If you mean create several different
filesystems and mount them separately, then that's very different from
your current situation. What you did in this case is comparable to
creating a raid0 array out of your disks, and I don't see how an ext
filesystem is going to cope any better than a btrfs filesystem if one
of the disks drops out. Using -d single isn't going to be of much use
in this case either, because that's like spanning an lvm volume over
several disks and then putting ext on top of it: it's fairly
unpredictable how much you'll actually salvage should a large chunk of
the filesystem suddenly disappear.


It sounds like what you're thinking of is creating several separate ext
filesystems and then just mounting them separately. There's nothing
inherently special about doing this with ext; you can do the same thing
with btrfs, and it would amount to about the same level of protection
(potentially more if you consider [meta]data checksums important, but
potentially less if you feel that ext is more robust for whatever
reason).


If you want to survive losing a single disk without the (absolute) fear
of the whole filesystem breaking, you have to have some sort of
redundancy, either by separating filesystems or by using some version
of raid other than raid0. I suppose the volume management of btrfs is
somewhat confusing at the moment, but when btrfs promises that you can
remove disks on the fly, it doesn't mean you can just unplug disks from
a raid0 without telling btrfs to put that data elsewhere first.
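
"Telling btrfs" looks roughly like this (hypothetical device and mount
point); the delete migrates the affected chunks to the remaining
devices, which needs enough free space on them, and only once it
finishes is it safe to pull the disk:

  btrfs filesystem show                # list btrfs filesystems and their member devices
  btrfs device delete /dev/sdX /mnt    # relocate its data and metadata, then drop it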



Re: kernel 3.3.4 damages filesystem (?)

2012-05-07 Thread Daniel Lee

On 05/07/2012 01:21 PM, Helmut Hullen wrote:

Hello, Daniel,

You wrote on 07.05.12:


Yes - I know. But btrfs promises that I can add bigger disks and
delete smaller disks on the fly. For something like a video
collection which will grow on and on, that's an interesting feature.
And such a (big) collection does need a grandfather-father-son backup;
it's not critical data.

With a file system like ext2/3/4 I can work with several directories
which are mounted together, but (as said before) one broken disk
doesn't disturb the others.



How can you do that with ext2/3/4? If you mean create several
different filesystems and mount them separately then that's very
different from your current situation. What you did in this case is
comparable to creating a raid0 array out of your disks. I don't see
how an ext filesystem is going to work any better if one of the disks
drops out than with a btrfs filesystem.


   mkfs.btrfs  -m raid1 -d raid0

with 3 disks gives me a cluster which looks like 1 disk/partition/
directory.
If one disk fails nothing is usable.


How is that different from putting ext on top of a raid0?



(Yes - I've read Hugo's explanation of -d single, I'll try this way)

With ext2/3/4 I mount 2 disks/partitions into the first disk. If one
disk fails, the contents of the 2 other disks are still readable.


There is nothing that prevents you from using this strategy with btrfs.
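
For example (hypothetical devices and mount points), the same layout
with btrfs is simply one independent filesystem per disk, so losing a
disk only loses that one mount:

  mkfs.btrfs /dev/sdb                  # one filesystem per disk, nothing striped between them
  mkfs.btrfs /dev/sdc
  mount /dev/sdb /srv/video
  mount /dev/sdc /srv/video/more       # mounted into the first, like the ext2/3/4 setup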




It sounds like what you're thinking of is creating several separate
ext filesystems and then just mounting them separately.


Yes - that's the old way. It's reliable but ugly.


There's nothing inherently special about doing this with ext, you can
do the same thing with btrfs and it would amount to about the same
level of protection (potentially more if you consider [meta]data
checksums important but potentially less if you feel that ext is more
robust for whatever reason).


No - as just mentioned: there's a big difference when one disk fails.


No there isn't.




If you want to survive losing a single disk without the (absolute)
fear of the whole filesystem breaking you have to have some sort of
redundancy either by separating filesystems or using some version of
raid other than raid0.


No - for some years now I have used a kind of outsourced backup. A copy
of all data is on a bundle of disks somewhere in the neighbourhood. As
mentioned: the data isn't business critical, it's just nice to have.
It's not worth something like raid1 or so (with twice the cost of a
non-raid solution).


I suppose the volume management of btrfs is
sort of confusing at the moment but when btrfs promises you can
remove disks on the fly it doesn't mean you can just unplug disks
from a raid0 without telling btrfs to put that data elsewhere first.


No - it's not confusing. It only needs a kind of recipe and much time:

 btrfs device add ...
 btrfs filesystem balance ... (perhaps not necessary)
 btrfs device delete ...
 btrfs filesystem balance ... (perhaps not necessary)

No intellectual challenge.
And completely different from hot-pluggable.


This is no different from any raid0 or spanning-disk setup that allows
growing or shrinking the array.




Re: filesystem full when it's not? out of inodes? huh?

2012-02-26 Thread Daniel Lee

On 02/25/2012 05:55 PM, Brian J. Murrell wrote:

$ btrfs filesystem df /usr
Data: total=3.22GB, used=3.22GB
System, DUP: total=8.00MB, used=4.00KB
System: total=4.00MB, used=0.00
Metadata, DUP: total=896.00MB, used=251.62MB
Metadata: total=8.00MB, used=0.00

I don't know if that's useful or not.

Any ideas?

Cheers
b.

3.22GB + (896MB * 2) = 5GB

There's no mystery here: you're simply out of space. The system df
command basically doesn't understand btrfs, so it will erroneously
report free space even when there isn't any.
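
Spelled out against the df output above (DUP allocations occupy twice
their nominal size on disk), the whole 5GB device is accounted for:

  Data:                        3.22GB
  Metadata, DUP:   2 x 896MB = 1.75GB
  System, DUP:     2 x 8MB   = 0.02GB
  Metadata+System (single):    0.01GB
  -----------------------------------
  total allocated:            ~5.0GB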




Re: filesystem full when it's not? out of inodes? huh?

2012-02-26 Thread Daniel Lee

On 02/26/2012 11:48 AM, Brian J. Murrell wrote:

On 12-02-26 02:37 PM, Daniel Lee wrote:

3.22GB + (896MB * 2) = 5GB

There's no mystery here, you're simply out of space.

Except the mystery that I had to expand the filesystem to something
between 20GB and 50GB in order to complete the operation, after which I
could reduce it back down to 5GB.

Cheers,
b.

What's mysterious about that? When you shrink it, btrfs has to cram
everything into the requested space, throwing away unused allocations,
and you had empty space that was taken up by the metadata allocation.
Did you compare btrfs fi df after you shrank it with before?
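
A sketch of that comparison, with a hypothetical target size:

  btrfs filesystem df /usr            # note how far "total" (allocated) exceeds "used"
  btrfs filesystem resize 5g /usr     # shrink; chunks are relocated and slack allocation dropped
  btrfs filesystem df /usr            # the "total" figures should now sit much closer to "used"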



Re: filesystem full when it's not? out of inodes? huh?

2012-02-26 Thread Daniel Lee
On 02/26/2012 12:05 PM, Brian J. Murrell wrote:
 On 12-02-26 02:52 PM, Daniel Lee wrote:
 What's mysterious about that?
 What's mysterious about needing to grow the filesystem to over 20GB to
 unpack 10MB of (small, so yes, many) files?
 When you shrink it btrfs is going to throw
 away unused data to cram it all in the requested space and you had empty
 space that was taken up by the metadata allocation.
 The shrinking is a secondary mystery. It's the need for more than 20GB of
 space for less than 3GB of files that's the major mystery.
Several people on this list have already answered this question, but
here goes.

Btrfs isn't like other, more common filesystems where metadata space is
fixed at filesystem creation. Rather, metadata allocations happen on
demand, just like data allocations do. Btrfs also tries to allocate
metadata in big chunks so that it doesn't get fragmented and lead to
slowdowns when doing something like running du on the root folder. The
downside to all of this is that it's not very friendly to small
filesystems: in your case it allocated some 1.8 GB of metadata, of
which only about 500 MB was actually in use.

In the future you can create your filesystem with single metadata
(mkfs.btrfs -m single) to free up more space for regular data, or look
into forcing mixed block groups mode, which is normally only enabled
for filesystems of 1GB or smaller. Mixed block group mode can't be
switched off, so you could make a really tiny FS, several hundred MB or
so, and then just grow it to whatever size you want. The btrfs wiki
seems to define small filesystems as anything under 16GB, so that might
be a good lower bound for actually using btrfs in a day-to-day
environment.
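
A sketch of those two mkfs choices, with a hypothetical device; --mixed
has to be picked at mkfs time and, as noted above, can't be turned off
later:

  mkfs.btrfs -m single /dev/sdX    # single metadata instead of DUP: halves the metadata overhead
  mkfs.btrfs --mixed /dev/sdX      # mixed data+metadata block groups, intended for small filesystems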

