[PATCH] btrfs: Add a new mount option to grow the FS to the limit of the device

2010-08-02 Thread Donggeun Kim
In some cases, resizing a file system to the maximum device size is required.
When a file system image is flashed to a block device,
the file system is often smaller than the device.
Currently, running the 'btrfsctl' utility is the only way
to grow the file system to the limit of the device.
A mount option that grows the file system to the device size
is useful regardless of whether the 'btrfsctl' program is available.
This patch makes the file system grow to the maximum size of the device
at mount time.
The new mount option is named 'maxsize'.

Thank you.

Signed-off-by: Donggeun Kim 
---
 fs/btrfs/ctree.h   |1 +
 fs/btrfs/disk-io.c |   30 ++
 fs/btrfs/super.c   |8 +++-
 3 files changed, 38 insertions(+), 1 deletions(-)

diff --git a/fs/btrfs/ctree.h b/fs/btrfs/ctree.h
index e9bf864..ee71964 100644
--- a/fs/btrfs/ctree.h
+++ b/fs/btrfs/ctree.h
@@ -1192,6 +1192,7 @@ struct btrfs_root {
 #define BTRFS_MOUNT_NOSSD  (1 << 9)
 #define BTRFS_MOUNT_DISCARD(1 << 10)
 #define BTRFS_MOUNT_FORCE_COMPRESS  (1 << 11)
+#define BTRFS_MOUNT_MAXSIZE(1 << 12)
 
 #define btrfs_clear_opt(o, opt)((o) &= ~BTRFS_MOUNT_##opt)
 #define btrfs_set_opt(o, opt)  ((o) |= BTRFS_MOUNT_##opt)
diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c
index 34f7c37..a1abf7c 100644
--- a/fs/btrfs/disk-io.c
+++ b/fs/btrfs/disk-io.c
@@ -1533,6 +1533,9 @@ struct btrfs_root *open_ctree(struct super_block *sb,
u32 stripesize;
u64 generation;
u64 features;
+   u64 dev_max_size;
+   u64 dev_old_size;
+   u64 devid;
struct btrfs_key location;
struct buffer_head *bh;
struct btrfs_root *extent_root = kzalloc(sizeof(struct btrfs_root),
@@ -1554,6 +1557,9 @@ struct btrfs_root *open_ctree(struct super_block *sb,
 
struct btrfs_super_block *disk_super;
 
+   struct btrfs_trans_handle *trans;
+   struct btrfs_device *device;
+
if (!extent_root || !tree_root || !fs_info ||
!chunk_root || !dev_root || !csum_root) {
err = -ENOMEM;
@@ -1928,6 +1934,30 @@ struct btrfs_root *open_ctree(struct super_block *sb,
btrfs_set_opt(fs_info->mount_opt, SSD);
}
 
+   if (btrfs_test_opt(tree_root, MAXSIZE)) {
+   devid = fs_devices->latest_devid;
+   device = btrfs_find_device(tree_root, devid, NULL, NULL);
+   if (!device) {
+   printk(KERN_WARNING "resizer unable to "
+   "find device %llu\n",
+   (unsigned long long)devid);
+   goto fail_trans_kthread;
+   }
+   dev_max_size = device->bdev->bd_inode->i_size;
+   dev_old_size = device->total_bytes;
+   if (dev_max_size > dev_old_size) {
+   trans = btrfs_start_transaction(tree_root, 0);
+   ret = btrfs_grow_device(trans, device, dev_max_size);
+   if (ret)
+   printk(KERN_INFO "unable to resize for %s\n",
+   device->name);
+   else
+   printk(KERN_INFO "new size for %s is %llu\n",
+   device->name, dev_max_size);
+   btrfs_commit_transaction(trans, tree_root);
+   }
+   }
+
if (btrfs_super_log_root(disk_super) != 0) {
u64 bytenr = btrfs_super_log_root(disk_super);
 
diff --git a/fs/btrfs/super.c b/fs/btrfs/super.c
index 859ddaa..d6b8cf2 100644
--- a/fs/btrfs/super.c
+++ b/fs/btrfs/super.c
@@ -68,7 +68,7 @@ enum {
Opt_nodatacow, Opt_max_inline, Opt_alloc_start, Opt_nobarrier, Opt_ssd,
Opt_nossd, Opt_ssd_spread, Opt_thread_pool, Opt_noacl, Opt_compress,
Opt_compress_force, Opt_notreelog, Opt_ratio, Opt_flushoncommit,
-   Opt_discard, Opt_err,
+   Opt_discard, Opt_maxsize, Opt_err,
 };
 
 static match_table_t tokens = {
@@ -92,6 +92,7 @@ static match_table_t tokens = {
{Opt_flushoncommit, "flushoncommit"},
{Opt_ratio, "metadata_ratio=%d"},
{Opt_discard, "discard"},
+   {Opt_maxsize, "maxsize"},
{Opt_err, NULL},
 };
 
@@ -235,6 +236,9 @@ int btrfs_parse_options(struct btrfs_root *root, char *options)
case Opt_discard:
btrfs_set_opt(info->mount_opt, DISCARD);
break;
+   case Opt_maxsize:
+   btrfs_set_opt(info->mount_opt, MAXSIZE);
+   break;
case Opt_err:
printk(KERN_INFO "btrfs: unrecognized mount option "
   "'%s'\n", p);
@@ -541,6 +545,8 @@ static int btrfs_show_options(struct seq_file *seq, struct vfsmount *vfs)
seq_puts(seq,

Re: synchronous removal?

2010-08-02 Thread Leonidas Spyropoulos
I think a cron job checking the output of df could do that.
The shell script would check whether there is enough space to create a
snapshot and otherwise remove one.

How about that?
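A minimal sketch of that cron job. The snap-YYYYmmddHHMMSS layout under the mount point and the btrfsctl -s (snapshot) / -D (delete) invocations are my assumptions, recalled from the era's tools, not part of the suggestion above:

```shell
#!/bin/sh
# Free space (KiB) on the filesystem containing $1, from POSIX `df -P`.
free_kb() {
    df -P "$1" | awk 'NR == 2 { print $4 }'
}

# If there is room, take a new snapshot; otherwise delete the oldest.
# The snap-* naming scheme and btrfsctl flags are assumptions.
rotate_snapshot() {
    mnt=$1 min_kb=$2
    if [ "$(free_kb "$mnt")" -ge "$min_kb" ]; then
        btrfsctl -s "$mnt/snap-$(date +%Y%m%d%H%M%S)" "$mnt"
    else
        oldest=$(ls -d "$mnt"/snap-* 2>/dev/null | head -n 1)
        [ -n "$oldest" ] && btrfsctl -D "${oldest##*/}" "$mnt"
    fi
}
```

Run from cron every few minutes this approximates the policy, but as the rest of the thread shows, df alone cannot tell you whether the cleaner has finished reclaiming the space.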

On Sun, Aug 1, 2010 at 10:11 PM, K. Richard Pixley  wrote:
>  I have an application where I want to snapshot, then do something, and
> based on the result, snapshot either the result or the previous state and
> then repeat.
>
> So far, so good.  But eventually my disk fills and I want to remove the
> oldest snapshots, as many as I need to in order to make room enough for the
> next cycle.
>
> I notice that when I remove old snapshots and delete old directories, the
> free space on my disk, (according to df), doesn't rise immediately.  But
> instead, I see an active btrfs_cleaner for a while and my free space rises
> while it runs.  I'm presuming that the removed files and snapshots aren't
> fully reclaimed immediately but rather wait for something akin to a garbage
> collection much the way modern berkeley file systems do.
>
> How can I either:
>
> a) wait for the cleaner to digest the free space
> b) determine that the cleaner has digested all available free space for now,
> (if not I can sleep for a while)
> c) synchronously force the cleaner to digest available free space
> d) something else I haven't thought of yet
>
> Basically, I want to check to see if there's enough space available.  If
> not, I want to remove some things, (including at least one snapshot), wait
> for the cleaner to digest, and then start over with the checking to see if
> there's enough space available and loop until I've removed enough things
> that there is enough space available.  How can I do that on a btrfs file
> system?
>
> --rich
> --
> To unsubscribe from this list: send the line "unsubscribe linux-btrfs" in
> the body of a message to majord...@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html
>



-- 
Caution: breathing may be hazardous to your health.


Number of hard links limit

2010-08-02 Thread Sami Liedes
Hi,

There's been discussion before on this list about the very small number
of hard links supported by btrfs.[1][2] In those threads, a frequently
asked question has been whether there is a real-world use case that the
limit breaks. It has also been pointed out that a fix for this would
need a disk format change.

As discussed in bug #15762 [3], there are certainly real-world use
cases this limitation breaks. I don't usually like to bring up my pet
bugs on mailing lists, and I'm sorry for doing it here, but if it
indeed needs a disk format change, I think this should be considered
before the format is set in stone. I won't personally lose sleep if
this is not fixed - I can use other filesystems for BackupPC and other
similar systems - although I'd be disappointed to see a production
backup system unexpectedly fail because of this. I just think it's
better to consider it before things are set in stone.

I'd venture to guess that if I have hit this limit with the very small
amount of btrfs use I've done, thousands of others are going to hit it
when btrfs is the default filesystem of distributions.

Sami


[1] http://comments.gmane.org/gmane.comp.file-systems.btrfs/4589
[2] http://thread.gmane.org/gmane.comp.file-systems.btrfs/3427
[3] https://bugzilla.kernel.org/show_bug.cgi?id=15762




Re: Number of hard links limit

2010-08-02 Thread Xavier Nicollet
On 2 August 2010 at 14:40, Sami Liedes wrote:
> [BTRFS supports only 256 hard-links per directory ...] but if it
> indeed needs a disk format change, I think this should be considered
> before the format is set in stone. I won't personally lose my sleep if
> this is not fixed - I can use other filesystems for backuppc and other
> similar systems, 

Wouldn't it be even better to actually patch BackupPC to handle btrfs
snapshots and COW (bcp) ?

-- 
Xavier Nicollet


Re: synchronous removal?

2010-08-02 Thread Bruce Guenter
On Sun, Aug 01, 2010 at 02:11:15PM -0700, K. Richard Pixley wrote:
> I notice that when I remove old snapshots and delete old directories, 
> the free space on my disk, (according to df), doesn't rise immediately.  

> Basically, I want to check to see if there's enough space available.  If 
> not, I want to remove some things, (including at least one snapshot), 
> wait for the cleaner to digest, and then start over with the checking to 
> see if there's enough space available and loop until I've removed enough 
> things that there is enough space available.  How can I do that on a 
> btrfs file system?

I asked a similar question a while back, and the short answer is that
you can't, short of unmounting and remounting the filesystem.  The
indication was made that writing a new ioctl to wait for all background
activity wouldn't be too hard, but I don't recall seeing it in any
recent patches.

See this thread:
http://www.mail-archive.com/linux-btrfs@vger.kernel.org/msg04872.html

-- 
Bruce Guenter http://untroubled.org/




Re: Are enormous extents harmful?

2010-08-02 Thread Chris Mason
On Sun, Aug 01, 2010 at 02:28:33PM +0100, Greg Kochanski wrote:
> I created a btrfs file system with a single 420 megabyte file
> in it. And, when I look at the file system with btrfs-debug,
> I see gigantic extents, as large as 99 megabytes:
> 
> > $ sudo btrfs-debug-tree /dev/sdb | grep extent
> > ...

btrfs-debug-tree is a great way to look at and learn more about the
btrfs disk layout.

> > dev extent chunk_tree 3
> > dev extent chunk_tree 3
> > extent data disk byte 80084992 nr 99958784
> > extent data offset 0 nr 99958784 ram 99958784
> > extent compression 0
> > extent data disk byte 181534720 nr 74969088
> > extent data offset 0 nr 74969088 ram 74969088
> > ...
> 
> 
> This may be too much of a good thing.   From the point
> of view of efficient reading, large extents are good, because
> they minimize seeks in sequential reads.
> But there will be diminishing returns when
> the extent gets bigger than the size of a physical disk cylinder.

Even for small extents (4k blocks) you can do large IOs.  ext2 and ext3
both do this and are generally able to do IOs much larger than a physical
cylinder.

This is another way of saying the size of the extent doesn't have to
impact fragmentation or seeking.  We can have 512 byte blocks with zero
seeks if we lay them out correctly.

> 
> For instance, modern disks have a data transfer rate of (about) 200MB/s,
> so adding one extra seek (about 8ms) in the middle of a
> 200MB extent can't possibly slow things down by more than 1%.
> (And, that's the worst-possible case.)
> 
> But, large extents (I think) also have costs.   For instance, if you are
> writing a byte into the middle of an extent, doesn't Btrfs have to copy
> the entire extent?If so, and if you have a 99MB extent, the cost
> of that write operation will be *huge*.

Btrfs doesn't do COW on the whole extent, just the portion you are
changing. 

The real benefit of extents for traditional filesystems is that you just
don't need as much metadata to describe the space used on disk by a
given file.  For huge files this matters quite a lot, just look at how
long it takes to delete a 1TB sparse file on ext3.

In btrfs the size of the extent matters even more.  For COW and
snapshots, we have more tracking per-extent than most filesystems do.
Keeping larger extents allows us to have less tracking and generally
makes things much more efficient.

> 
> Likewise, if you have compressed extents, and you want to read one
> byte near the end of the extent, Btrfs needs to uncompress the
> entire extent.Under some circumstances, you might have to
> decompress 99MB to read one short block of data.   (Admittedly,
> cacheing will make this worst-case scenario less common, but
> it will still be there sometimes.)

The size of a compressed extent is limited to 256k, both on disk and in
ram.  We try to make sure uncompressing the extent won't make the
machine fall over.

So, the issues you talk about do all exist.  We try to manage the
compromises around extent size and still keep the benefits of large
extents.  There was a mount option to limit the max extent size in the
past, but it was not used very often and made the enospc code
dramatically more complex.  It was removed to cut down on enospc
problems.

(more extents mean more metadata which means more space per operation)
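To put rough numbers on that parenthetical (the extent sizes here are illustrative, not btrfs defaults):

```shell
# Extent records needed to map a 1 TiB file, illustrating why fewer,
# larger extents mean less metadata to track per file.
tib=$((1024 * 1024 * 1024 * 1024))
echo $((tib / 4096))                    # with 4 KiB extents:   268435456
echo $((tib / (128 * 1024 * 1024)))     # with 128 MiB extents: 8192
```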

-chris


Re: synchronous removal?

2010-08-02 Thread K. Richard Pixley
How would you determine whether to remove another snapshot or to wait for 
previously removed space to be digested?

If you simply remove snapshots then you'll end up removing all of your
snapshots and df will still say you don't have enough space.  Been there,
done that.  What I'm doing right now is removing a snapshot and immediately
sleeping for 60 seconds in the hope that it will be digested in that time.
Judicious use of df and log lines tells me that some of the space is
digested in that time, but I have no way (that I know of) to determine
whether all of it has been.
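Since, per this thread, there is no interface to wait on btrfs_cleaner directly, the remove-then-sleep workaround can at best poll df. A sketch, with the 5-second poll interval and default timeout as arbitrary choices:

```shell
#!/bin/sh
# Wait until at least $2 KiB are free on the filesystem at $1, giving up
# after $3 seconds (default 300).  Returns 0 once the space appears,
# 1 on timeout.  This only observes the cleaner's progress through df;
# it cannot force the cleaner to run or wait on it directly.
wait_for_free() {
    mnt=$1 want_kb=$2 timeout=${3:-300} waited=0
    while :; do
        free=$(df -P "$mnt" | awk 'NR == 2 { print $4 }')
        [ "$free" -ge "$want_kb" ] && return 0
        [ "$waited" -ge "$timeout" ] && return 1
        sleep 5
        waited=$((waited + 5))
    done
}
```

After removing a snapshot, something like `wait_for_free /mnt/btrfs 2097152 600` would block until 2 GiB are free or ten minutes pass, and its exit status would decide whether to remove another snapshot.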

--rich

On Aug 2, 2010, at 4:35, Leonidas Spyropoulos  wrote:

> I think a cron job checking the output of df could do that.
> The shell script will check if there is enough space to create a snapshot
> otherwise remove a snapshot.
> 
> How about that?
> 
> On Sun, Aug 1, 2010 at 10:11 PM, K. Richard Pixley  wrote:
>>  I have an application where I want to snapshot, then do something, and
>> based on the result, snapshot either the result or the previous state and
>> then repeat.
>> 
>> So far, so good.  But eventually my disk fills and I want to remove the
>> oldest snapshots, as many as I need to in order to make room enough for the
>> next cycle.
>> 
>> I notice that when I remove old snapshots and delete old directories, the
>> free space on my disk, (according to df), doesn't rise immediately.  But
>> instead, I see an active btrfs_cleaner for a while and my free space rises
>> while it runs.  I'm presuming that the removed files and snapshots aren't
>> fully reclaimed immediately but rather wait for something akin to a garbage
>> collection much the way modern berkeley file systems do.
>> 
>> How can I either:
>> 
>> a) wait for the cleaner to digest the free space
>> b) determine that the cleaner has digested all available free space for now,
>> (if not I can sleep for a while)
>> c) synchronously force the cleaner to digest available free space
>> d) something else I haven't thought of yet
>> 
>> Basically, I want to check to see if there's enough space available.  If
>> not, I want to remove some things, (including at least one snapshot), wait
>> for the cleaner to digest, and then start over with the checking to see if
>> there's enough space available and loop until I've removed enough things
>> that there is enough space available.  How can I do that on a btrfs file
>> system?
>> 
>> --rich
>> 
> 
> 
> 
> -- 
> Caution: breathing may be hazardous to your health.


Re: synchronous removal?

2010-08-02 Thread K. Richard Pixley
On Aug 2, 2010, at 8:02, Bruce Guenter  wrote:

> On Sun, Aug 01, 2010 at 02:11:15PM -0700, K. Richard Pixley wrote:
>> I notice that when I remove old snapshots and delete old directories, 
>> the free space on my disk, (according to df), doesn't rise immediately.  
> 
>> Basically, I want to check to see if there's enough space available.  If 
>> not, I want to remove some things, (including at least one snapshot), 
>> wait for the cleaner to digest, and then start over with the checking to 
>> see if there's enough space available and loop until I've removed enough 
>> things that there is enough space available.  How can I do that on a 
>> btrfs file system?
> 
> I asked a similar question a while back, and the short answer is that
> you can't, short of unmounting and remounting the filesystem.  The
> indication was made that writing a new ioctl to wait for all background
> activity wouldn't be too hard, but I don't recall seeing it in any
> recent patches.

That's interesting. Thank you. 

Would anyone like to make a bid on doing this work?  That is, is there anyone 
with the knowledge and code familiarity available for me to commission this 
work?

--rich
> 


Re: Number of hard links limit

2010-08-02 Thread Anthony Roberts
On Mon, 2 Aug 2010 15:05:56 +0200, Xavier Nicollet wrote:
> On 2 August 2010 at 14:40, Sami Liedes wrote:
>> [BTRFS supports only 256 hard-links per directory ...] but if it
>> indeed needs a disk format change, I think this should be considered
>> before the format is set in stone. I won't personally lose my sleep if
>> this is not fixed - I can use other filesystems for backuppc and other
>> similar systems, 
> 
> Wouldn't it be even better to actually patch BackupPC to handle btrfs
> snapshots and COW (bcp) ?

That's not the only application impacted by this.

Also, I think it's unrealistic to expect everyone else to code to
BTRFS-specific ioctls when there are other filesystems and other platforms
to worry about. It would also be nice to be able to tar/rsync/whatever
between BTRFS and something else, like ext3 or another OS entirely, without
archiving tools either blowing up or requiring application-specific
knowledge of how to convert dedups to hard links and back.

Also, I believe it's not strictly 256 links, it's dependent on the length
of the names.

I recall Chris posting something about being able to fix this without a
format change, though it wasn't a priority yet.

-Anthony


Re: wanted X found X-1 but you got X-2

2010-08-02 Thread Adi
Hi Miao,

Now I get this error (from btrfsck):

parent transid verify failed on 5403553792 wanted 103380 found 103378
btrfsck: disk-io.c:739: open_ctree_fd: Assertion `!(!tree_root->node)' failed.

If I try to mount it, it shows the same error in dmesg, but neither one hangs
the terminal, as it did without the patches.
Do you need more debug output? I didn't know how to enable it on Gentoo; what
flags do I need?
Hm, and the last patch didn't show any output from "patch -p1"... is that normal?

Bye, Adi

 Original Message 
> Date: Mon, 02 Aug 2010 09:04:27 +0800
> From: Miao Xie 
> To: Adi 
> CC: liubo2...@cn.fujitsu.com, linux-btrfs@vger.kernel.org
> Subject: Re: wanted X found X-1 but you got X-2

> Hi, Adi
> 
> On Sat, 31 Jul 2010 18:34:24 +0200, Adi wrote:
> > OK, I have tried the 2.6.35-rc6 kernel, as TiCPU on #btrfs suggested,
> > but I also got an error. I don't think it has anything to do with kcrypt,
> > as I can mount another partition without any problems.
> [snip]
> > [ cut here ]
> > kernel BUG at fs/btrfs/async-thread.c:603!
> > invalid opcode:  [#1] PREEMPT
> [snip]
> It seems this problem is the same as the one I have fixed.
> Could you try applying the following patches and testing again?
>http://lkml.org/lkml/2010/7/29/86
>http://lkml.org/lkml/2010/7/29/84
>http://lkml.org/lkml/2010/7/29/82
> 
> Thanks
> Miao



Re: Number of hard links limit

2010-08-02 Thread Michael Niederle
> Also, I believe it's not strictly 256 links, it's dependent on the length
> of the names.
> 
> I recall Chris posting something about being able to fix this without a
> format change, though it wasn't a priority yet.

To my knowledge, the limit is 64KB for all names of a single file, and
according to Chris it will take a format change to fix this.

I ran into the limit some months ago while installing some Gentoo packages.

Greetings, Michael


Re: Number of hard links limit

2010-08-02 Thread Roberto Ragusa
Michael Niederle wrote:
>> Also, I believe it's not strictly 256 links, it's dependent on the length
>> of the names.
>>
>> I recall Chris posting something about being able to fix this without a
>> format change, though it wasn't a priority yet.
> 
> To my knowledge, the limit is 64KB for all names of a single file, and
> according to Chris it will take a format change to fix this.
> 
> I ran into the limit some months ago while installing some Gentoo packages.

That means it would not work for my backup server.
At 4 backups per day, it would fail for filenames with 45 characters after
just one year.
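The estimate is easy to check, assuming the 64KB total-name figure quoted above and ignoring any per-entry metadata overhead:

```shell
# 64 KiB of name space per file, divided among 45-byte link names.
echo $((65536 / 45))   # maximum links with 45-character names: 1456
echo $((4 * 365))      # links accumulated in one year at 4 backups/day: 1460
```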

-- 
   Roberto Ragusa        mail at robertoragusa.it


Re: Number of hard links limit

2010-08-02 Thread Oystein Viggen
* [Roberto Ragusa] 

> That means it would not work for my backup server.
> At 4 backups per day, failure for filenames with 45 characters after just
> one year.

IIRC, the limit on hard links is per directory.  That is, if you put
each hard link into its own directory, there is basically no limit to the
number of hard links you can make to one file.

Thus, many generations of backup with BackupPC shouldn't trigger the
problem, as each generation is stored in its own directory tree.  The
problem appears when your source data has many identical files in the
same directory, since these would be deduplicated as hard links to the
same file in the backup pool.
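The per-directory layout described above is easy to demonstrate on any POSIX filesystem (the name-space limit itself is btrfs-specific, but the trick is generic; directory names here are made up):

```shell
#!/bin/sh
# Spread hard links to one pool file across per-generation directories,
# the way a generational backup tool lays them out: each directory holds
# only one name for the file, so a per-directory name limit is never hit.
d=$(mktemp -d)
echo data > "$d/pool-file"
for gen in 1 2 3; do
    mkdir "$d/backup.$gen"
    ln "$d/pool-file" "$d/backup.$gen/file"   # one extra name per directory
done
ls -l "$d/pool-file" | awk '{ print $2 }'     # link count: 4 (original + 3)
rm -rf "$d"
```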

Øystein
-- 
This message was generated by a horde of attack elephants armed with PRNGs.



Why is btrfs pronounced "butter-eff-ess"?

2010-08-02 Thread Wang Shaoyan
As far as I know, btrfs comes from "btree file system", but why is it
pronounced "butter-eff-ess"?

-- 
Wang Shaoyan


Re: Why is btrfs pronounced "butter-eff-ess"?

2010-08-02 Thread Chris Samuel
On 03/08/10 13:27, Wang Shaoyan wrote:

> As far as I know, btrfs comes from "btree file system", but why is it
> pronounced "butter-eff-ess"?

My guess is that it is a pun on "better-fs", btr being a possible
contraction of better.

English is a funny old language.

-- 
 Chris Samuel  :  http://www.csamuel.org/  :  Melbourne, VIC


Re: Why is btrfs pronounced "butter-eff-ess"?

2010-08-02 Thread Aaron Toponce
On Tue, Aug 03, 2010 at 11:27:32AM +0800, Wang Shaoyan wrote:
> As far as I know, btrfs comes from "btree file system", but why is it
> pronounced "butter-eff-ess"?

The same reason we pronounce "ext3" as "eks tee three" rather than "eee
eks tee three": laziness. "btree file system" is two extra syllables of
inefficiency over "butter fs".

-- 
. O .   O . O   . . O   O . .   . O .
. . O   . O O   O . O   . O O   . . O
O O O   . O .   . O O   O O .   O O O




Re: Why is btrfs pronounced "butter-eff-ess"?

2010-08-02 Thread Tracy Reed
On Tue, Aug 03, 2010 at 11:27:32AM +0800, Wang Shaoyan spake thusly:
> As far as I know, btrfs comes from "btree file system", but why is it
> pronounced "butter-eff-ess"?

Because fat and greasy is attractive and encourages adoption of the
filesystem. I'm hoping we can get RMS to be the butter-eff-ess
spokesman.

-- 
Tracy "that's a joke, son" Reed
http://tracyreed.org

