Change small filesystem to normal

2012-07-22 Thread Swâmi Petaramesh
Hi,

I've created a small BTRFS filesystem, where metadata+data are mixed
(and metadata are not DUP'ed).

Then I've enlarged the FS to 1 GB; now I'd like to make it normal
with separate data and metadata, and DUP'ed metadata.

Is there a way to do this without reformatting the FS?

TIA, kind regards.

-- 
Swâmi Petaramesh sw...@petaramesh.org http://petaramesh.org PGP 9076E32E
Don't look for me: I'm not on Facebook.



Re: Change small filesystem to normal

2012-07-22 Thread Ilya Dryomov
On Sun, Jul 22, 2012 at 05:06:24PM +0200, Swâmi Petaramesh wrote:
 Hi,
 
 I've created a small BTRFS filesystem, where metadata+data are mixed
 (and metadata are not DUP'ed).
 
 Then I've enlarged the FS to 1 GB; now I'd like to make it normal
 with separate data and metadata, and DUP'ed metadata.
 
 Is there a way to do this without reformatting the FS?

No, currently there is no way to do this.  You'll have to create a new
filesystem with mkfs.btrfs.
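
Roughly, the recreate-and-copy route looks like this (a sketch only:
/dev/sdX1 and the mount points are placeholders, and mkfs destroys the
old filesystem, so back the data up first):

  # back up the data somewhere safe
  cp -a /mnt/small/. /mnt/backup/
  umount /mnt/small

  # recreate with separate data and metadata chunks; -m dup duplicates
  # metadata, -d single keeps a single copy of data
  mkfs.btrfs -d single -m dup /dev/sdX1

  # mount and restore
  mount /dev/sdX1 /mnt/small
  cp -a /mnt/backup/. /mnt/small/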

Thanks,

Ilya


Re: Change small filesystem to normal

2012-07-22 Thread Roman Mamedov
On Sun, 22 Jul 2012 17:06:24 +0200
Swâmi Petaramesh sw...@petaramesh.org wrote:

 Hi,
 
 I've created a small BTRFS filesystem, where metadata+data are mixed
 (and metadata are not DUP'ed).
 
 Then I've enlarged the FS to 1 GB; now I'd like to make it normal
 with separate data and metadata, and DUP'ed metadata.

Considering the metadata overallocation bug [1] is still not fixed even in the
latest kernels and no one seems to care all that much, I would not recommend
doing that.

Personally I now use a mixed filesystem on a 1TB disk without any problems,
and do not think there's anything wrong with mixed. In fact there's been
some talk of making mixed-mode allocation the default, and maybe even
removing support for the split mode: see [2].

[1] http://comments.gmane.org/gmane.comp.file-systems.btrfs/17848

[2] http://kerneltrap.org/mailarchive/linux-btrfs/2010/10/29/6885925
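
(For completeness: on a fresh device, mixed mode can be requested
explicitly; /dev/sdX below is a placeholder:

  mkfs.btrfs --mixed /dev/sdX

mkfs.btrfs also picks mixed mode on its own for very small filesystems,
which is presumably how the original poster's FS ended up that way.)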


-- 
With respect,
Roman

~~~
Stallman had a printer,
with code he could not see.
So he began to tinker,
and set the software free.




Re: btrfs on top of dmcrypt with SSD - ssd or nossd + crypt performance?

2012-07-22 Thread Marc MERLIN
I'm still getting a bit more data before updating the btrfs wiki with
my best recommendations for today.

First, everything I've read so far says that the ssd btrfs mount option
makes btrfs slower in benchmarks.
What gives? 
Anyone using it or know of a reason not to mount my ssd with nossd?


Next, I got a new Samsung 830 512GB SSD which is supposed to be very
high performance.
The raw device seems fast enough on a quick hdparm test:


But once I encrypt it, it drops to 5 times slower than my 1TB spinning
disk in the same laptop:
gandalfthegreat:~# hdparm -tT /dev/mapper/ssdcrypt 
/dev/mapper/ssdcrypt:
 Timing cached reads:   15412 MB in  2.00 seconds = 7715.37 MB/sec
 Timing buffered disk reads:  70 MB in  3.06 seconds =  22.91 MB/sec 

gandalfthegreat:~# hdparm -tT /dev/mapper/cryptroot (spinning disk)
/dev/mapper/cryptroot:
 Timing cached reads:   16222 MB in  2.00 seconds = 8121.03 MB/sec
 Timing buffered disk reads: 308 MB in  3.01 seconds = 102.24 MB/sec 


The non encrypted SSD device gives me:
/dev/sda4:
 Timing cached reads:   14258 MB in  2.00 seconds = 7136.70 MB/sec
 Timing buffered disk reads: 1392 MB in  3.00 seconds = 463.45 MB/sec

which is 4x faster than my non encrypted spinning disk, as expected.
I used aes-xts-plain as recommended on
http://www.mayrhofer.eu.org/ssd-linux-benchmark

gandalfthegreat:~# cryptsetup status /dev/mapper/ssdcrypt
/dev/mapper/ssdcrypt is active.
  type:    LUKS1
  cipher:  aes-xts-plain
  keysize: 256 bits
  device:  /dev/sda4
  offset:  4096 sectors
  size:    926308752 sectors
  mode:    read/write
gandalfthegreat:~# lsmod |grep -e aes
aesni_intel            50443  66
cryptd                 14517  18 ghash_clmulni_intel,aesni_intel
aes_x86_64             16796  1 aesni_intel
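
(For reference, creating such a volume goes something like this; a
sketch, not a paste, using the same device and the 256-bit key shown in
the status output above:

cryptsetup luksFormat --cipher aes-xts-plain --key-size 256 /dev/sda4
cryptsetup luksOpen /dev/sda4 ssdcrypt

With XTS the key is split in half, so 256 bits here means AES-128.)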

I realize this is not btrfs' fault :) although I'm a bit stuck
as to how to benchmark btrfs if the underlying encrypted device is so
slow :)

Thanks,
Marc
-- 
A mouse is a device used to point at the xterm you want to type in - A.S.R.
Microsoft is to operating systems 
   what McDonalds is to gourmet cooking
Home page: http://marc.merlins.org/


Re: btrfs on top of dmcrypt with SSD - ssd or nossd + crypt performance?

2012-07-22 Thread Martin Steigerwald
Hi Marc,

On Sunday 22 July 2012, Marc MERLIN wrote:
 I'm still getting a bit more data before updating the btrfs wiki with
 my best recommendations for today.
 
 First, everything I've read so far says that the ssd btrfs mount option
 makes btrfs slower in benchmarks.
 What gives?
 Anyone using it or know of a reason not to mount my ssd with nossd?
 
 
 Next, I got a new Samsung 830 512GB SSD which is supposed to be very
 high performance.
 The raw device seems fast enough on a quick hdparm test:
 
 
 But once I encrypt it, it drops to 5 times slower than my 1TB spinning
 disk in the same laptop:
 gandalfthegreat:~# hdparm -tT /dev/mapper/ssdcrypt
 /dev/mapper/ssdcrypt:
  Timing cached reads:   15412 MB in  2.00 seconds = 7715.37 MB/sec
  Timing buffered disk reads:  70 MB in  3.06 seconds =  22.91 MB/sec
 
 
 gandalfthegreat:~# hdparm -tT /dev/mapper/cryptroot (spinning disk)
 /dev/mapper/cryptroot:
  Timing cached reads:   16222 MB in  2.00 seconds = 8121.03 MB/sec
  Timing buffered disk reads: 308 MB in  3.01 seconds = 102.24 MB/sec
 

Have you looked at whether certain kernel threads are eating CPU?

I would have a look at this.

Or use atop to have a complete system overview during the hdparm run. You 
may want to use its default 10-second delay.

Anyway, hdparm is only a very rough measurement. (A test time of 2-3 seconds 
is really short.)

Did you repeat tests three or five times and look at the deviation?
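
Repeat runs are easy to collect in one go, something like this (a
sketch; adjust the device path to yours):

merkaba:~# for i in 1 2 3 4 5; do hdparm -t /dev/mapper/ssdcrypt; done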

For what it is worth I can beat that with ecryptfs on top of Ext4 on top of 
an Intel SSD 320 (SATA 300 based):

martin@merkaba:~ su - ms
Password: 
ms@merkaba:~ df -hT .
Filesystem Type      Size  Used Avail Use% Mounted on
/home/.ms  ecryptfs  224G  211G   11G  96% /home/ms
ms@merkaba:~ dd if=/dev/zero of=testfile bs=1M count=1000 conv=fsync
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB) copied, 20.1466 s, 52.0 MB/s
ms@merkaba:~ rm testfile 
ms@merkaba:~ sudo fstrim /home
[sudo] password for ms: 
ms@merkaba:~

That's way slower than a dd without encryption, but it's way faster than 
your hdparm figures. The SSD was underutilized according to the hard disk 
LED of this ThinkPad T520 with an Intel i5 Sandybridge 2.5 GHz dual-core. 
I started atop too late to see what's going on.

(I have not yet tested ecryptfs on top of BTRFS, but you didn't test a 
filesystem with hdparm anyway.)

Thanks,
-- 
Martin 'Helios' Steigerwald - http://www.Lichtvoll.de
GPG: 03B0 0D6C 0040 0710 4AFA  B82F 991B EAAC A599 84C7


Re: btrfs on top of dmcrypt with SSD - ssd or nossd + crypt performance?

2012-07-22 Thread Martin Steigerwald
On Sunday 22 July 2012, Martin Steigerwald wrote:
 Hi Marc,
 
 On Sunday 22 July 2012, Marc MERLIN wrote:
  I'm still getting a bit more data before updating the btrfs wiki with
  my best recommendations for today.
  
  First, everything I've read so far says that the ssd btrfs mount
  option makes btrfs slower in benchmarks.
  What gives?
  Anyone using it or know of a reason not to mount my ssd with nossd?
  
  
  Next, I got a new Samsung 830 512GB SSD which is supposed to be very
  high performance.
  The raw device seems fast enough on a quick hdparm test:
  
  
  But once I encrypt it, it drops to 5 times slower than my 1TB
  spinning disk in the same laptop:
  gandalfthegreat:~# hdparm -tT /dev/mapper/ssdcrypt
  
  /dev/mapper/ssdcrypt:
   Timing cached reads:   15412 MB in  2.00 seconds = 7715.37 MB/sec
   Timing buffered disk reads:  70 MB in  3.06 seconds =  22.91 MB/sec
  
  
  
  gandalfthegreat:~# hdparm -tT /dev/mapper/cryptroot (spinning disk)
  
  /dev/mapper/cryptroot:
   Timing cached reads:   16222 MB in  2.00 seconds = 8121.03 MB/sec
   Timing buffered disk reads: 308 MB in  3.01 seconds = 102.24 MB/sec
  
  
 
 Have you looked at whether certain kernel threads are eating CPU?
 
 I would have a look at this.
 
 Or use atop to have a complete system overview during the hdparm run.
 You may want to use its default 10-second delay.
 
 Anyway, hdparm is only a very rough measurement. (A test time of 2-3
 seconds is really short.)
 
 Did you repeat tests three or five times and look at the deviation?
 
 For what it is worth I can beat that with ecryptfs on top of Ext4 on top
 of an Intel SSD 320 (SATA 300 based):
 
 martin@merkaba:~ su - ms
 Password:
 ms@merkaba:~ df -hT .
 Filesystem Type      Size  Used Avail Use% Mounted on
 /home/.ms  ecryptfs  224G  211G   11G  96% /home/ms
 ms@merkaba:~ dd if=/dev/zero of=testfile bs=1M count=1000 conv=fsync
 1000+0 records in
 1000+0 records out
 1048576000 bytes (1.0 GB) copied, 20.1466 s, 52.0 MB/s
 ms@merkaba:~ rm testfile
 ms@merkaba:~ sudo fstrim /home
 [sudo] password for ms:
 ms@merkaba:~
 
 That's way slower than a dd without encryption, but it's way faster than
 your hdparm figures. The SSD was underutilized according to the
 hard disk LED of this ThinkPad T520 with an Intel i5 Sandybridge 2.5 GHz
 dual-core. I started atop too late to see what's going on.
 
 (I have not yet tested ecryptfs on top of BTRFS, but you didn't test a
 filesystem with hdparm anyway.)


You measured read speed, so here is read speed as well:

martin@merkaba:~#1 su - ms
Password: 
ms@merkaba:~ LANG=C df -hT .
Filesystem Type  Size  Used Avail Use% Mounted on
/home/.ms  ecryptfs  224G  211G   11G  96% /home/ms
ms@merkaba:~ LANG=C df -hT /home
Filesystem   Type  Size  Used Avail Use% Mounted on
/dev/mapper/merkaba-home ext4  224G  211G   11G  96% /home


ms@merkaba:~ dd if=/dev/zero of=testfile bs=1M count=1000 conv=fsync
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB) copied, 19.4592 s, 53.9 MB/s


ms@merkaba:~ sync; su -c 'echo 3 > /proc/sys/vm/drop_caches'; dd if=testfile of=/dev/null bs=1M count=1000
Password: 
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB) copied, 12.4633 s, 84.1 MB/s
ms@merkaba:~

ms@merkaba:~ sync; su -c 'echo 3 > /proc/sys/vm/drop_caches'; dd if=testfile of=/dev/null bs=1M count=1000
Password: 
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB) copied, 12.0747 s, 86.8 MB/s


ms@merkaba:~ sync; su -c 'echo 3 > /proc/sys/vm/drop_caches'; dd if=testfile of=/dev/null bs=1M count=1000
Password: 
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB) copied, 11.7131 s, 89.5 MB/s
ms@merkaba:~


Figures got faster with each measurement. Maybe SSD internal caching?

At least that's way faster than 22.91 MB/sec ;)

On reads dd used up one core. On writes dd and flush-ecryptfs used up a 
little more than one core, about 110-150%, but not both cores 
completely.

Kernel is 3.5.

Thanks,
-- 
Martin 'Helios' Steigerwald - http://www.Lichtvoll.de
GPG: 03B0 0D6C 0040 0710 4AFA  B82F 991B EAAC A599 84C7


Re: btrfs on top of dmcrypt with SSD - ssd or nossd + crypt performance?

2012-07-22 Thread Marc MERLIN
On Sun, Jul 22, 2012 at 09:35:10PM +0200, Martin Steigerwald wrote:
 Or use atop to have a complete system overview during the hdparm run. You 
 may want to use its default 10-second delay.
 
atop looked totally fine and showed that a long dd took 6% CPU of one core.

 Anyway, hdparm is only a very rough measurement. (A test time of 2-3 seconds 
 is really short.)
 
 Did you repeat tests three or five times and look at the deviation?

Yes. I also ran dd with 1GB, and got the exact same number.

Thanks for confirming that it is indeed way faster for you, and that you're
going through the filesystem layer, which should make it slower than my dd
against the raw device.

But anyway, while my question about using 'nossd' is on topic here, the speed
of the raw device falling dramatically once encrypted isn't really relevant,
so I'll continue on the dm-crypt mailing list.

Thanks
Marc
-- 
A mouse is a device used to point at the xterm you want to type in - A.S.R.
Microsoft is to operating systems 
   what McDonalds is to gourmet cooking
Home page: http://marc.merlins.org/  


btrfs-convert complains that fs is mounted even if it isn't

2012-07-22 Thread Jochen

Hi,

I'm trying to run btrfs-convert on a system that has three raid 
partitions (boot/md1, swap/md2 and root/md3). When I boot a rescue 
system from md1 and try to run btrfs-convert /dev/md3, it complains 
that /dev/md3 is already mounted, although it definitely is not. The 
only partition mounted is /dev/md1, because of the rescue system. When I 
replicate the setup in a local VM, booting the rescue system from 
another disk (no /dev/md1 mounted) helps: btrfs-convert runs. However, 
I cannot do this on the server.
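
For what it's worth, one way to double-check what a mounted-test might
see (just a guess on my part that the tool consults /etc/mtab, which can
be stale in a rescue environment, rather than /proc/mounts):

grep md3 /proc/mounts /etc/mtab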


Is there anything I can do about this?

Regards,
Jochen




Re: btrfs on top of dmcrypt with SSD - file access 5x slower than spinning disk

2012-07-22 Thread Marc MERLIN
On Sun, Jul 22, 2012 at 01:44:28PM -0700, Marc MERLIN wrote:
 On Sun, Jul 22, 2012 at 09:35:10PM +0200, Martin Steigerwald wrote:
  Or use atop to have a complete system overview during the hdparm run. You 
  may want to use its default 10-second delay.
  
 atop looked totally fine and showed that a long dd took 6% CPU of one core.
 
  Anyway, hdparm is only a very rough measurement. (A test time of 2-3 seconds 
  is really short.)
  
  Did you repeat tests three or five times and look at the deviation?
 
 Yes. I also ran dd with 1GB, and got the exact same number.

Oh my, the plot thickens.
I'm bringing this back on this list because
1) I've confirmed that I can get 500MB/s unencrypted reads (2GB file)
   with btrfs on the SSD
2) Despite getting a measly 23MB/s reading from /dev/mapper/ssdcrypt,
   once I mount it as a btrfs drive and read a file from it, I now
   get 267MB/s
3) Stat'ing a bunch of files on the SSD is very slow (5-6x slower than
   stat'ing the same files from disk). Using noatime did not help.

On an _unencrypted_ partition on the SSD, running du -sh on a directory
with 15K files takes 23 seconds, versus 4 seconds on the encrypted
spinning drive, both with a similar btrfs filesystem and the same
kernel (3.4.4).

Since I'm getting some kind of seek-like problem on the SSD (yes, I know
there are no real seeks on an SSD), it seems that I have a btrfs problem
that is at least not related to encryption.

Unencrypted btrfs on SSD:
gandalfthegreat:/mnt/mnt2# mount -o compress=lzo,discard,nossd,space_cache,noatime /dev/sda2 /mnt/mnt2
gandalfthegreat:/mnt/mnt2# echo 3 > /proc/sys/vm/drop_caches; time du -sh src
514M    src
real    0m22.667s

Encrypted btrfs on spinning drive:
gandalfthegreat:/var/local# echo 3 > /proc/sys/vm/drop_caches; time du -sh src
514M    src
real    0m3.881s

I've run this many times and get the same numbers.
I've tried deadline and noop on /dev/sda (the SSD) and du is just as slow.  
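
For reference, the scheduler switching goes through sysfs, along these
lines; the bracketed entry is the active one:

gandalfthegreat:~# echo deadline > /sys/block/sda/queue/scheduler
gandalfthegreat:~# cat /sys/block/sda/queue/scheduler
noop [deadline] cfq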

I also tried with:
- space_cache and nospace_cache
- ssd and nossd
- noatime didn't seem to help even though I was hopeful on this one.

In all cases, I get:
gandalfthegreat:/mnt/mnt2# echo 3 > /proc/sys/vm/drop_caches; time du -sh src
514M    src
real    0m22.537s

22 seconds for 15K files on an SSD, 5 times slower than a
spinning disk with the same data.
What's going on?




More details below for anyone interested:

The test below shows that I can access the encrypted SSD 10x faster by reading
through a btrfs filesystem than by doing a dd read from the device node.

So I suppose I could just ignore the dd-from-the-device-node issue; however,
not really. Let's take a directory with some source files inside:

Let's start with the hard drive in my laptop (dmcrypt with btrfs):
gandalfthegreat:/var/local# find src | wc -l
15261
gandalfthegreat:/var/local# echo 3 > /proc/sys/vm/drop_caches; time du -sh src
514M    src
real    0m4.068s
So on an encrypted spinning disk, it takes 4 seconds.

On my SSD in the same machine, with the same encryption and the same btrfs
filesystem, I get 5-6 times slower results:
gandalfthegreat:/mnt/btrfs_pool1/var/local# time du -sh src
514M    src
real    0m24.937s

Incidentally, 5x is also the speed difference between my encrypted HD and
encrypted SSD with dd.
Now, why du is 5x slower and dd of a file from the filesystem is 2.5x
faster, I have no idea :-/


See below:

1) drop caches
gandalfthegreat:/mnt/btrfs_pool1/var/local/VirtualBox VMs/w2k_virtual# echo 3 > /proc/sys/vm/drop_caches

gandalfthegreat:/mnt/btrfs_pool1/var/local/VirtualBox VMs/w2k_virtual# dd if=w2k-s001.vmdk of=/dev/null
2146631680 bytes (2.1 GB) copied, 8.03898 s, 267 MB/s
- 267MB/s reading from the file through the encrypted filesystem. That's good.

For comparison:
gandalfthegreat:/mnt/mnt2# dd if=w2k-s001.vmdk of=/dev/null 
2146631680 bytes (2.1 GB) copied, 4.33393 s, 495 MB/s
- almost 500MB/s reading through another unencrypted filesystem on the same SSD

gandalfthegreat:/mnt/btrfs_pool1/var/local/VirtualBox VMs/w2k_virtual# dd if=/dev/mapper/ssdcrypt of=/dev/null bs=1M count=1000
1048576000 bytes (1.0 GB) copied, 45.1234 s, 23.2 MB/s

- 23MB/s reading from the block device that my FS is mounted from. WTF?

gandalfthegreat:/mnt/btrfs_pool1/var/local/VirtualBox VMs/w2k_virtual# echo 3 > /proc/sys/vm/drop_caches; dd if=w2k-s001.vmdk of=test
2146631680 bytes (2.1 GB) copied, 17.9129 s, 120 MB/s

- 120MB/s copying a file from the SSD to itself. That's not bad.

gandalfthegreat:/mnt/btrfs_pool1/var/local/VirtualBox VMs/w2k_virtual# echo 3 > /proc/sys/vm/drop_caches; dd if=test of=/dev/null
2146631680 bytes (2.1 GB) copied, 8.4907 s, 253 MB/s

- reading the new copied file still shows 253MB/s, good.

gandalfthegreat:/mnt/btrfs_pool1/var/local/VirtualBox VMs/w2k_virtual# dd if=test of=/dev/null
2146631680 bytes (2.1 GB) copied, 2.11001 s, 1.0 GB/s

- reading without dropping the cache shows 1GB/s 

I'm very lost now, any 

Re: [PATCH v3 1/1] Btrfs: Check INCOMPAT flags on remount and add helper function

2012-07-22 Thread Li Zefan
 diff --git a/fs/btrfs/ctree.h b/fs/btrfs/ctree.h

 index a0ee2f8..3a1a700 100644
 --- a/fs/btrfs/ctree.h
 +++ b/fs/btrfs/ctree.h
 @@ -3103,6 +3103,19 @@ void __btrfs_abort_transaction(struct btrfs_trans_handle *trans,
  struct btrfs_root *root, const char *function,
  unsigned int line, int errno);
  
 +static inline void btrfs_chk_lzo_incompat(struct btrfs_root *root)


Isn't btrfs_set_lzo_incompat() a better name?

 +{
 + struct btrfs_super_block *disk_super;
 + u64 features;
 +
 + disk_super = root->fs_info->super_copy;
 + features = btrfs_super_incompat_flags(disk_super);
 + if (!(features & BTRFS_FEATURE_INCOMPAT_COMPRESS_LZO)) {
 + features |= BTRFS_FEATURE_INCOMPAT_COMPRESS_LZO;
 + btrfs_set_super_incompat_flags(disk_super, features);
 + }
 +}
 +




Re: Change small filesystem to normal

2012-07-22 Thread Swâmi Petaramesh

You're painfully right, Roman.

A freshly formatted 1 GB BTRFS filesystem on which 81 MB of data has 
been put shows only ~260 MB of free space and reserves something like 
2 x 380 MB of metadata.


This is absolutely ridiculous of BTRFS... :-/

Kind regards.


On 22/07/2012 17:37, Roman Mamedov wrote:

On Sun, 22 Jul 2012 17:06:24 +0200
Swâmi Petaramesh sw...@petaramesh.org wrote:


Hi,

I've created a small BTRFS filesystem, where metadata+data are mixed
(and metadata are not DUP'ed).

Then I've enlarged the FS to 1 GB; now I'd like to make it normal
with separate data and metadata, and DUP'ed metadata.

Considering the metadata overallocation bug [1] is still not fixed even in the
latest kernels and no one seems to care all that much, I would not recommend
doing that.

Personally I now use a mixed filesystem on a 1TB disk without any problems,
and do not think there's anything wrong with mixed. In fact there's been
some talk of making mixed-mode allocation the default, and maybe even
removing support for the split mode: see [2].

[1] http://comments.gmane.org/gmane.comp.file-systems.btrfs/17848

[2] http://kerneltrap.org/mailarchive/linux-btrfs/2010/10/29/6885925






Re: Change small filesystem to normal

2012-07-22 Thread cwillu
On Sun, Jul 22, 2012 at 10:55 PM, Swâmi Petaramesh sw...@petaramesh.org wrote:
 You're painfully right Roman,

 A freshly formatted 1 GB BTRFS filesystem on which 81 MB of data has been
 put shows only ~260 MB of free space and reserves something like 2 x 380 MB
 of metadata.

 This is absolutely ridiculous of BTRFS... :-/

That's an artifact of the small size of that filesystem and the
default size of allocations, which is why mixed mode exists.

The metadata allocation is about 4% on most filesystems: I see 38gb of
allocated but unused metadata space on a 900gb fs and 70gb on a 1.7tb
fs, and the referenced thread reports 170gb on what appears to be a
4tb fs; while not ideal, it's not remotely as bad as the 25% overhead
of the minimum 256mb*2 metadata allocation on a small 1gb fs*.  The
behaviour of a small filesystem simply isn't the same as the behaviour
of a large filesystem.

* Note that 1gb is still considered a very rather btrfs filesystem,
for which mixed mode is recommended!
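
The allocation split is visible with btrfs filesystem df (the mount
point is a placeholder, output elided):

# btrfs filesystem df /mnt
Data: total=..., used=...
System, DUP: total=..., used=...
Metadata, DUP: total=..., used=...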


Re: Change small filesystem to normal

2012-07-22 Thread cwillu
 * Note that 1gb is still considered a very rather btrfs filesystem,
 for which mixed mode is recommended!

Deleted the wrong word:  a rather small btrfs filesystem is what I intended.