Rich Freeman <ri...@gentoo.org> writes:

> On Sun, Jan 11, 2015 at 8:14 AM, lee <l...@yagibdah.de> wrote:
>> Rich Freeman <ri...@gentoo.org> writes:
>>>
>>> Doing backups with dd isn't terribly practical, but it is completely
>>> safe if done correctly.  The LV would need to be the same size or
>>> larger, or else your filesystem will be truncated.
>>
>> Yes, my impression is that it isn't very practical or a good method, and
>> I find it strange that LVM is still lacking some major features.
>
> Generally you do backup at the filesystem layer, not at the volume
> management layer.  LVM just manages a big array of disk blocks.  It
> has no concept of files.

Backing up at the filesystem layer may require downtime, whereas the
whole point of taking snapshots and then backing up the volume is to
avoid that downtime.
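As a minimal sketch of what I mean, assuming a hypothetical volume
group vg0 with a logical volume named data (names, sizes and paths are
illustrative; this needs root and free extents in the VG):

```shell
# Create a small copy-on-write snapshot; the live volume stays in service.
lvcreate --snapshot --size 5G --name data-snap /dev/vg0/data

# Mount the frozen snapshot read-only and back it up at the file level.
mkdir -p /mnt/data-snap
mount -o ro /dev/vg0/data-snap /mnt/data-snap
tar -czf /backup/data-$(date +%F).tar.gz -C /mnt/data-snap .

# The snapshot only needs to live as long as the backup runs.
umount /mnt/data-snap
lvremove -f /dev/vg0/data-snap
```

The snapshot gives you a consistent point-in-time image without taking
the volume offline, which is exactly what a plain filesystem-level
backup of the live volume cannot guarantee.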

>>> Just create a small boot partition and give the rest to zfs.  A
>>> partition is a block device, just like a disk.  ZFS doesn't care if it
>>> is managing the entire disk or just a partition.
>>
>> ZFS does care: You cannot export ZFS pools residing on partitions, and
>> apparently ZFS cannot use the disk cache as efficiently when it uses
>> partitions.
>
> Cite?  This seems unlikely.

,---- [ man zpool ]
|            For pools to be portable, you  must  give  the  zpool  command
|            whole  disks,  not  just partitions, so that ZFS can label the
|            disks with portable EFI labels.  Otherwise,  disk  drivers  on
|            platforms  of  different  endianness  will  not  recognize the
|            disks.
`----

You may be able to export them anyway, but then you don't really know
what will happen when you try to import them.  I didn't keep a bookmark
for the article that mentioned the disk cache.

When you read about ZFS, you'll find that using whole disks is
recommended while using partitions is not.
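To make the distinction concrete, here is a sketch (pool and device
names are hypothetical) of the two ways of handing devices to zpool
that the man page excerpt above is talking about:

```shell
# Whole-disk vdevs: ZFS partitions the disks itself and writes portable
# EFI (GPT) labels, so the pool can move between platforms.
zpool create tank mirror sda sdb

# Partition vdevs: this works, but per the quoted man page the pool may
# not be recognized on platforms of different endianness.
zpool create tank2 mirror sda3 sdb3

# Moving a pool between machines is export on one host...
zpool export tank
# ...and import on the other.
zpool import tank
```

The commands themselves do not refuse partitions; the portability
caveat only bites later, at import time on a different platform.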

>> Caching in memory is also less efficient because another
>> file system has its own cache.
>
> There is no other filesystem.  ZFS is running on bare metal.  It is
> just pointing to a partition on a drive (an array of blocks) instead
> of the whole drive (an array of blocks).  The kernel does not cache
> partitions differently from drives.

How do you use a /boot partition that doesn't have a file system?

>> On top of that, you have the overhead of
>> software raid for that small partition unless you can dedicate
>> hardware-raided disks for /boot.
>
> Just how often are you reading/writing from your boot partition?  You
> only read from it at boot time, and you only write to it when you
> update your kernel/etc.  There is no requirement for it to be raided
> in any case, though if you have multiple disks that wouldn't hurt.

If you are willing to accept that the system goes down, has to be
brought down, or cannot boot because the disk holding your /boot
partition has failed, then you may be able to get away with a
non-raided /boot partition.

When you do that, what's the advantage other than saving the software
raid?  You still need either to dedicate a disk to it or to leave a
part of all the other disks unused; you cannot give them to ZFS whole,
because otherwise they would be of different sizes.
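The layout being discussed can be sketched like this for two disks
(partition numbers, sizes and pool names are all illustrative, and
this is destructive, so it is only a sketch):

```shell
# Identical small /boot partition plus a large ZFS partition on each disk.
parted -s /dev/sda mklabel gpt \
  mkpart boot 1MiB 513MiB set 1 raid on \
  mkpart zfs 513MiB 100%
parted -s /dev/sdb mklabel gpt \
  mkpart boot 1MiB 513MiB set 1 raid on \
  mkpart zfs 513MiB 100%

# Mirror the small partitions so a single-disk failure does not leave
# the system unbootable -- this is the software raid overhead at issue.
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
mkfs.ext2 /dev/md0    # /boot still needs an ordinary filesystem

# The large partitions, identically sized by construction, go to ZFS.
zpool create rpool mirror /dev/sda2 /dev/sdb2
```

Note that this gives ZFS partitions rather than whole disks, which is
precisely the trade-off against the portability advice quoted earlier.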

>>> This sort of thing was very common before grub2 started supporting
>>> more filesystems.
>>
>> That doesn't mean it's a good setup.  I'm finding it totally
>> undesirable.  Having a separate /boot partition has always been a
>> crutch.
>
> Better not buy an EFI motherboard.  :)

Yes, they are a security hazard and a PITA.  Maybe I can sit it out
until they come up with something better.

>>>> With ZFS at hand, btrfs seems pretty obsolete.
>>>
>>> You do realize that btrfs was created when ZFS was already at hand,
>>> right?  I don't think that ZFS will be likely to make btrfs obsolete
>>> unless it adopts more dynamic desktop-oriented features (like being
>>> able to modify a vdev), and is relicensed to something GPL-compatible.
>>> Unless those happen, it is unlikely that btrfs is going to go away,
>>> unless it is replaced by something different.
>>
>> Let's say it seems /currently/ obsolete.
>
> You seem to have an interesting definition of "obsolete" - something
> which holds potential promise for the future is better described as
> "experimental."

Can you build systems on potential promises for the future?

If the resources it takes to develop btrfs were put towards improving
ZFS, or the other way round, wouldn't that be more efficient?  We might
even have a better solution available now.  Of course, it's not a good
idea to remove variety, so it's a dilemma.  But are the features
provided, or intended to be provided, and the problems both btrfs and
ZFS are trying to solve so different that each of them needs to
re-invent the wheel?

>> Solutions are needed /now/, not in about 10 years when btrfs might be
>> ready.
>>
>
> Well, feel free to create one.  Nobody is stopping anybody from using
> zfs, but unless it is either relicensed by Oracle or the
> kernel/grub/etc is relicensed by everybody else you're unlikely to see
> it become a mainstream solution.  That seems to be the biggest barrier
> to adoption, though it would be nice for small installations if vdevs
> were more dynamic.
>
> By all means use it if that is your preference.  A license may seem
> like a small thing, but entire desktop environments have been built as
> a result of them.  When a mainstream linux distro can't put ZFS
> support on their installation CD due to licensing compatibility it
> makes it pretty impractical to use it for your default filesystem.
>
> I'd love to see the bugs worked out of btrfs faster, but for what I've
> paid for it so far I'd say I'm getting good value for my $0.  It is
> FOSS - it gets done when those contributing to it (whether paid or
> not) are done.  The ones who are paying for it get to decide for
> themselves if it meets their needs, which could be quite different
> from yours.

What are these licensing issues good for other than preventing
solutions?

> I'd actually be interested in a comparison of the underlying btrfs vs
> zfs designs.  I'm not talking about implementation (bugs/etc), but the
> fundamental designs.  What features are possible to add to one which
> are impossible to add to the other, what performance limitations will
> the one always suffer in comparison to the other, etc?  All the
> comparisons I've seen just compare the implementations, which is
> useful if you're trying to decide what to install /right now/ but less
> so if you're trying to understand the likely future of either.

The future is not predictable, and you can only install something
/now/.  What you will be able to install in the future and what your
requirements will be then aren't very relevant.

In the future, you'll probably be in basically the same situation
you're in now, i.e. you'll find the widely used file systems not
exactly up to your requirements, alongside "experimental" file systems
that may make you ask what to install /then/ and what their future
might be.

So what's the benefit you'd get from the comparison you're interested
in?


-- 
Again we must be afraid of speaking of daemons for fear that daemons
might swallow us.  Finally, this fear has become reasonable.
