On Friday 21 of January 2011 20:28:19 Freddie Cash wrote:
> On Sun, Jan 9, 2011 at 10:30 AM, Hugo Mills <hugo-l...@carfax.org.uk> wrote:
> > On Sun, Jan 09, 2011 at 09:59:46AM -0800, Freddie Cash wrote:
> >> Let's see if I can match up the terminology and layers a bit:
> >> 
> >> LVM Physical Volume == Btrfs disk == ZFS disk / vdevs
> >> LVM Volume Group == Btrfs "filesystem" == ZFS storage pool
> >> LVM Logical Volume == Btrfs subvolume == ZFS volume
> >> 'normal' filesystem == Btrfs subvolume (when mounted) == ZFS filesystem
> >> 
> >> Does that look about right?
> > 
> >   Kind of. The thing is that the way that btrfs works is massively
> > different to the way that LVM works (and probably massively different
> > to the way that ZFS works, but I don't know much about ZFS, so I can't
> > comment there). I think that trying to think of btrfs in LVM terms is
> > going to lead you to a large number of incorrect conclusions. It's
> > just not a good model to use.
> 
> My biggest issue trying to understand Btrfs is figuring out the layers
> involved.
> 
> With ZFS, it's extremely easy:
> 
> disks --> vdev --> pool --> filesystems
> 
> With LVM, it's fairly easy:
> 
> disks -> volume group --> volumes --> filesystems
> 
> But, Btrfs doesn't make sense to me:
> 
> disks --> filesystem --> sub-volumes???
> 
> So, is Btrfs pooled storage or not?  Do you throw 24 disks into a
> single Btrfs filesystem, and then split that up into separate
> sub-volumes as needed?  From the looks of things, you don't have to
> partition disks or worry about sizes before formatting (if the space
> is available, Btrfs will use it).  But it also looks like you still
> have to manage disks.
> 
> Or, maybe it's just that the initial creation is done via mkfs (as in,
> formatting a partition with a filesystem) that's tripping me up after
> using ZFS for so long (zpool creates the storage pool, manages the
> disks, sets up redundancy levels, etc;  zfs creates filesystems and
> volumes, and sets properties; no newfs/mkfs involved).
> 
> It looks like ZFS, Btrfs, and LVM should work in similar manners, but
> the overloaded terminology (pool, volume, sub-volume, filesystem are
> different in all three) and new terminology that's only in Btrfs is
> confusing.

With btrfs you need to have *a* filesystem; once you have one, you can add and 
remove disks/partitions from it with the 'btrfs' tool. There's no need to run 
'mkfs.btrfs' again after the initial creation.

As for managing storage space: you don't. There is a single pool of storage 
that you can't divide, and quota support is also absent. The only things you 
can do with storage are add more or remove some.
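To make the division of labour concrete, here is a sketch of that workflow. The device names (/dev/sdb etc.) and the mount point are hypothetical; this assumes root and btrfs-progs, and is illustrative rather than something to run verbatim:

```shell
# Initial creation is the only job of mkfs.btrfs:
mkfs.btrfs /dev/sdb /dev/sdc
mount /dev/sdb /mnt/pool

# Everything afterwards goes through the 'btrfs' tool:
btrfs device add /dev/sdd /mnt/pool       # grow the pool online
btrfs device delete /dev/sdc /mnt/pool    # shrink it; data is migrated off first

# Show the devices backing the filesystem:
btrfs filesystem show
```

Note that there is no step where you carve the space into fixed-size pieces; everything in the filesystem draws from the one pool.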

> >> Just curious, why all the new terminology in btrfs for things that
> >> already existed?  And why are old terms overloaded with new meanings?
> >> I don't think I've seen a write-up about that anywhere (or I don't
> >> remember it if I have).
> > 
> >   The main awkward piece of btrfs terminology is the use of "RAID" to
> > describe btrfs's replication strategies. It's not RAID, and thinking
> > of it in RAID terms is causing lots of confusion. Most of the other
> > things in btrfs are, I think, named relatively sanely.
> 
> No, the main awkward piece of btrfs terminology is overloading
> "filesystem" to mean "collection of disks" and creating "sub-volume"
> to mean "filesystem".  At least, that's how it looks from way over
> here.  :)

Subvolumes exist so that you can snapshot only part of the files residing on a 
filesystem; that's their only feature right now.
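For example (paths are hypothetical; assumes a btrfs filesystem is already mounted at /mnt/pool):

```shell
# A subvolume is created inside the one filesystem, not on its own device:
btrfs subvolume create /mnt/pool/home

# A snapshot covers just that subvolume, not the whole filesystem:
btrfs subvolume snapshot /mnt/pool/home /mnt/pool/home-snap

# Both show up in the subvolume list:
btrfs subvolume list /mnt/pool
```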

> 
> >> Perhaps it's time to start looking at separating the btrfs pool
> >> creation tools out of mkfs (or renaming mkfs.btrfs), since you're
> >> really building a storage pool, and not a filesystem.  It would
> >> prevent a lot of confusion with new users.  It's great that there's a
> >> separate btrfs tool for manipulating btrfs setups, but "mkfs.btrfs" is
> >> just wrong for creating the btrfs setup.
> > 
> >   I think this is the wrong thing to do. I hope my explanation above
> > helps.
> 
> As I understand it, the mkfs.btrfs is used to create the initial
> filesystem across X disks with Y redundancy.  For everything else
> afterward, the btrfs tool is used to add disks, create snapshots,
> delete snapshots, change redundancy settings, create sub-volumes, etc.
>  Why not just add a "create" option to btrfs and retire mkfs.btrfs
> completely.  Or rework mkfs.btrfs to create sub-volumes of an existing
> btrfs setup?

All Linux file systems use mkfs.<fs name>; there's no reason why btrfs should 
be different. You use one command to create the filesystem and another command 
to manage it. I'd say that's a pretty sane division.

> 
> What would be great is if there was an image that showed the layers in
> Btrfs and how they interacted with the userspace tools.

It would be either
* very complicated (if it included the different allocation groups and how 
  they interact) and useless for users, or
* very simple (you put one filesystem on many disks; the snapshottable part 
  of the FS is called a subvolume) and pointless...

> Having a set of graphics that compared the layers in Btrfs with the
> layers in the "normal" Linux disk/filesystem partitioning scheme, and
> the LVM layering, would be best.

btrfs doesn't have layers to compare, so it's rather hard to make such a graph.

> There's lots of info in the wiki, but no images, ASCII-art, graphics,
> etc.  Trying to picture this mentally is not working.  :)


-- 
Hubert Kario
QBS - Quality Business Software
02-656 Warszawa, ul. Ksawerów 30/85
tel. +48 (22) 646-61-51, 646-74-24
www.qbs.com.pl