On Mon, 05 Oct 2015 11:43:18 +0300
Erkki Seppala wrote:
> Lionel Bouton writes:
>
> > 1/ AFAIK the kernel md RAID1 code behaves the same (last time I checked
> > you need 2 processes to read from 2 devices at once) and I've never seen
> >
Rich Freeman posted on Sun, 04 Oct 2015 08:21:53 -0400 as excerpted:
> On Sun, Oct 4, 2015 at 8:03 AM, Lionel Bouton
> wrote:
>>
>> This focus on single reader RAID1 performance surprises me.
>>
>> 1/ AFAIK the kernel md RAID1 code behaves the same (last time I
Lionel Bouton writes:
> 1/ AFAIK the kernel md RAID1 code behaves the same (last time I checked
> you need 2 processes to read from 2 devices at once) and I've never seen
> anyone arguing that the current md code is unstable.
This indeed seems to be the case on
On 2015-10-05 07:16, Lionel Bouton wrote:
Hi,
On 04/10/2015 14:03, Lionel Bouton wrote:
[...]
This focus on single reader RAID1 performance surprises me.
1/ AFAIK the kernel md RAID1 code behaves the same (last time I checked
you need 2 processes to read from 2 devices at once) and I've
On Mon, Oct 5, 2015 at 7:16 AM, Lionel Bouton
wrote:
> According to the bad performance -> unstable logic, md would then be the
> least stable RAID1 implementation, which doesn't make sense to me.
>
The argument wasn't that bad performance meant that something was
Hi,
On 04/10/2015 14:03, Lionel Bouton wrote:
> [...]
> This focus on single reader RAID1 performance surprises me.
>
> 1/ AFAIK the kernel md RAID1 code behaves the same (last time I checked
> you need 2 processes to read from 2 devices at once) and I've never seen
> anyone arguing that the
On Mon, 5 Oct 2015 13:16:03 +0200
Lionel Bouton wrote:
> To better illustrate my point.
>
> According to Phoronix tests, BTRFS RAID-1 is even faster than md RAID1
> most of the time.
>
> http://www.phoronix.com/scan.php?page=article&item=btrfs_raid_mdadm&num=1
>
> The
On 2015-10-05 10:04, Duncan wrote:
On Mon, 5 Oct 2015 13:16:03 +0200
Lionel Bouton wrote:
To better illustrate my point.
According to Phoronix tests, BTRFS RAID-1 is even faster than md RAID1
most of the time.
On Sun, Oct 4, 2015 at 8:03 AM, Lionel Bouton
wrote:
>
> This focus on single reader RAID1 performance surprises me.
>
> 1/ AFAIK the kernel md RAID1 code behaves the same (last time I checked
> you need 2 processes to read from 2 devices at once) and I've never
Hi,
On 04/10/2015 04:09, Duncan wrote:
> Russell Coker posted on Sat, 03 Oct 2015 18:32:17 +1000 as excerpted:
>
>> Last time I checked a BTRFS RAID-1 filesystem would assign each process
>> to read from one disk based on its PID. Every RAID-1 implementation
>> that has any sort of
Russell Coker posted on Sat, 03 Oct 2015 18:32:17 +1000 as excerpted:
> Last time I checked a BTRFS RAID-1 filesystem would assign each process
> to read from one disk based on its PID. Every RAID-1 implementation
> that has any sort of performance optimisation will allow a single
> process
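The PID-based selection described above can be illustrated with a toy model (a sketch of the policy as described in the thread, not the kernel's actual code):

```python
# Toy model of btrfs RAID-1 read balancing as described above: the
# mirror is chosen from the reader's PID, so a single process stays
# pinned to one device and only multiple processes spread the load.

def pick_mirror(pid: int, num_mirrors: int = 2) -> int:
    """btrfs historically chose a copy with pid % num_mirrors."""
    return pid % num_mirrors

# A single process always hits the same disk...
assert {pick_mirror(1234) for _ in range(100)} == {0}

# ...while two processes with different PID parity use both disks.
assert {pick_mirror(1234), pick_mirror(1235)} == {0, 1}
```

This is why a lone sequential reader sees single-disk throughput, while md RAID1's balancer can behave differently under the same load.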
On Fri, 2 Oct 2015 10:07:24 PM Austin S Hemmelgarn wrote:
> > ARC presumably worked better than the other Solaris caching options. It
> > was ported to Linux with zfsonlinux because that was the easy way of
> > doing it.
>
> Actually, I think part of that was also the fact that ZFS is a COW
>
On 2015-10-02 00:21, Russell Coker wrote:
On Sat, 26 Sep 2015 12:20:41 AM Austin S Hemmelgarn wrote:
FYI:
Linux's pagecache uses an LRU cache algorithm, and in the general case it
works well enough
I'd argue that 'general usage' should be better defined in this
statement. Obviously, ZFS's ARC
On Sat, 26 Sep 2015 12:20:41 AM Austin S Hemmelgarn wrote:
> > FYI:
> > Linux's pagecache uses an LRU cache algorithm, and in the general case it
> > works well enough
>
> I'd argue that 'general usage' should be better defined in this
> statement. Obviously, ZFS's ARC implementation provides better
>
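The LRU behaviour attributed to the pagecache above can be sketched in a few lines (a simplification; the actual pagecache uses a more elaborate multi-list variant):

```python
from collections import OrderedDict

class LRUCache:
    """Minimal LRU: touching an entry refreshes it; the coldest is evicted."""
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.entries = OrderedDict()

    def access(self, key, value=None):
        if key in self.entries:
            self.entries.move_to_end(key)       # refresh recency on hit
            return self.entries[key]
        self.entries[key] = value
        if len(self.entries) > self.capacity:   # evict least recently used
            self.entries.popitem(last=False)
        return value

cache = LRUCache(2)
cache.access("a", 1)
cache.access("b", 2)
cache.access("a")        # touch "a", so "b" is now the coldest entry
cache.access("c", 3)     # inserting "c" evicts "b"
assert "b" not in cache.entries and "a" in cache.entries
```

A pure LRU like this is easily flushed by a one-pass scan of a large file, which is the weakness ARC is designed to mitigate.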
Hi,
thank you all for your helpful comments.
From what I've read, I came up with the following guidelines (for myself;
YMMV):
- Use btrfs for generic data storage on spinning disks and for
everything on ssds.
- Use zfs for spinning disks that may be used for cow-unfriendly
workloads, like vm
On Sat, Sep 19, 2015 at 9:26 PM, Jim Salter wrote:
>
> ZFS, by contrast, works like absolute gangbusters for KVM image storage.
I'd be interested in what allows ZFS to handle KVM image storage well,
and whether this could be implemented in btrfs. I'd think that the
fragmentation
I suspect that the answer most likely boils down to "the ARC".
ZFS uses an Adaptive Replacement Cache instead of a standard FIFO, which
keeps blocks in cache longer if they have been accessed in cache. This
means much higher cache hit rates, which also means minimizing the
effects of
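The recency/frequency separation described above can be sketched with a heavily simplified two-list cache (real ARC additionally keeps "ghost" lists of recently evicted keys and adapts the split between the two lists; this toy keeps only the core idea):

```python
from collections import OrderedDict

class TwoListCache:
    """ARC-flavoured toy: t1 holds blocks seen once, t2 blocks seen again.
    Eviction prefers t1, so re-accessed blocks survive one-shot scans.
    Real ARC also keeps ghost lists and adapts the t1/t2 split."""
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.t1 = OrderedDict()   # recency list: seen exactly once
        self.t2 = OrderedDict()   # frequency list: seen more than once

    def access(self, key):
        if key in self.t1:                    # second hit: promote to t2
            del self.t1[key]
            self.t2[key] = True
        elif key in self.t2:
            self.t2.move_to_end(key)          # refresh recency within t2
        else:
            self.t1[key] = True               # first sighting goes to t1
        while len(self.t1) + len(self.t2) > self.capacity:
            victim = self.t1 if self.t1 else self.t2
            victim.popitem(last=False)        # scan traffic is evicted first

cache = TwoListCache(4)
for block in ("hot", "hot"):                  # re-accessed: lands in t2
    cache.access(block)
for block in ("s1", "s2", "s3", "s4", "s5"):  # one-shot sequential scan
    cache.access(block)
assert "hot" in cache.t2                      # survives the scan
```

Under a plain LRU of the same capacity, the scan would have evicted "hot"; that resistance to scan pollution is a large part of why ARC helps workloads like VM image hosting.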
On 2015-09-25 08:48, Rich Freeman wrote:
On Sat, Sep 19, 2015 at 9:26 PM, Jim Salter wrote:
ZFS, by contrast, works like absolute gangbusters for KVM image storage.
I'd be interested in what allows ZFS to handle KVM image storage well,
and whether this could be implemented
Pretty much bog-standard, as ZFS goes. Nothing different than what's
recommended for any generic ZFS use.
* set blocksize to match hardware blocksize - 4K drives get 4K
blocksize, 8K drives get 8K blocksize (Samsung SSDs)
* LZO compression is a win. But it's not like anything sucks without
On 2015-09-25 09:12, Jim Salter wrote:
Pretty much bog-standard, as ZFS goes. Nothing different than what's
recommended for any generic ZFS use.
* set blocksize to match hardware blocksize - 4K drives get 4K
blocksize, 8K drives get 8K blocksize (Samsung SSDs)
* LZO compression is a win. But
2015-09-25 16:52 GMT+03:00 Jim Salter :
> Pretty much bog-standard, as ZFS goes. Nothing different than what's
> recommended for any generic ZFS use.
>
> * set blocksize to match hardware blocksize - 4K drives get 4K blocksize, 8K
> drives get 8K blocksize (Samsung SSDs)
> * LZO
On 2015-09-25 10:02, Timofey Titovets wrote:
2015-09-25 16:52 GMT+03:00 Jim Salter :
Pretty much bog-standard, as ZFS goes. Nothing different than what's
recommended for any generic ZFS use.
* set blocksize to match hardware blocksize - 4K drives get 4K blocksize, 8K
drives
On Sat, 19 Sep 2015 12:13:29 AM Austin S Hemmelgarn wrote:
> The other option (which for some reason I almost never see anyone
> suggest), is to expose 2 disks to the guest (ideally stored on different
> filesystems), and do BTRFS raid1 on top of that. In general, this is
> what I do (except I
On Fri, 18 Sep 2015 12:00:15 PM Duncan wrote:
> The caveat here is that if the VM/DB is active during the backups (btrfs
> send/receive or other), it'll still COW1 any writes during the existence
> of the btrfs snapshot. If the backup can be scheduled during VM/DB
> downtime or at least when
I can't recommend btrfs+KVM, and I speak from experience.
Performance will be fantastic... except when it's completely abysmal.
When I tried it, I also ended up with a completely borked (btrfs-raid1)
filesystem that would only mount read-only and read at hideously reduced
speeds after about
On 2015-09-18 04:22, Duncan wrote:
one way or another, you're going to have to write two things, one a checksum
of the other, and if they are in-place-overwrites, while the race can be
narrowed, there's always going to be a point at which either one or the
other will have been written, while
On 2015-09-17 14:35, Chris Murphy wrote:
On Thu, Sep 17, 2015 at 11:56 AM, Gert Menke wrote:
Hi,
thank you for your answers!
So it seems there are several suboptimal alternatives here...
MD+LVM is very close to what I want, but md has no way to cope with silent
data
On Thu, Sep 17, 2015 at 11:56 AM, Gert Menke wrote:
> Hi,
>
> thank you for your answers!
>
> So it seems there are several suboptimal alternatives here...
>
> MD+LVM is very close to what I want, but md has no way to cope with silent
> data corruption. So if I'd want to use a
Hi,
thank you for your answers!
So it seems there are several suboptimal alternatives here...
MD+LVM is very close to what I want, but md has no way to cope with
silent data corruption. So if I'd want to use a guest filesystem that
has no checksums either, I'm out of luck.
I'm honestly a bit
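The gap Gert describes can be made concrete with a toy model (an illustration of the principle, not of either implementation): with two mirrors and no checksum there is no way to tell which copy is right, while a stored checksum arbitrates.

```python
import zlib

# Two mirrored copies of a block; one suffers a silent bit flip.
good = b"important data"
bad = b"important dxta"
stored_crc = zlib.crc32(good)   # written when the block was created

# md-style RAID-1: no checksum, so when the mirrors disagree there is
# nothing to say which copy is correct -- a read may return either one.
mirrors = [good, bad]
assert mirrors[0] != mirrors[1]          # corruption exists but is invisible

# checksumming-fs style: verify each copy against the stored checksum
# and return the one that matches (btrfs/ZFS then repair the bad copy).
recovered = next(c for c in mirrors if zlib.crc32(c) == stored_crc)
assert recovered == good
```

This is the whole of the argument for btrfs/ZFS over MD+LVM here: redundancy alone can mask a failed read, but only a checksum can say which mirror is telling the truth.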
On 17 September 2015 at 18:56, Gert Menke wrote:
> MD+LVM is very close to what I want, but md has no way to cope with silent
> data corruption. So if I'd want to use a guest filesystem that has no
> checksums either, I'm out of luck.
> I'm honestly a bit confused here - isn't
On Thu, Sep 17, 2015 at 07:56:08PM +0200, Gert Menke wrote:
> Hi,
>
> thank you for your answers!
>
> So it seems there are several suboptimal alternatives here...
>
> MD+LVM is very close to what I want, but md has no way to cope with
> silent data corruption. So if I'd want to use a guest
On 17.09.2015 at 21:43, Hugo Mills wrote:
On Thu, Sep 17, 2015 at 07:56:08PM +0200, Gert Menke wrote:
BTRFS looks really nice feature-wise, but is not (yet) optimized for
my use-case I guess. Disabling COW would certainly help, but I don't
want to lose the data checksums. Is
On 17.09.2015 at 20:35, Chris Murphy wrote:
You can use Btrfs in the guest to get at least notification of SDC.
Yes, but I'd rather not depend on all potential guest OSes having btrfs
or something similar.
Another way is to put a conventional fs image on e.g. GlusterFS with
checksumming
Hugo Mills posted on Thu, 17 Sep 2015 19:43:14 + as excerpted:
>> Is nodatacowbutkeepdatachecksums a feature that might turn up
>> in the future?
>
> No. If you try doing that particular combination of features, you
> end up with a filesystem that can be inconsistent: there's a race
>
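Hugo's race can be simulated: with in-place (nodatacow) writes, the data block and its checksum are two separate on-disk updates, and a crash between them leaves a block whose stale checksum is indistinguishable from real corruption. (A toy model only.)

```python
import zlib

disk = {"block": b"old contents"}
disk["csum"] = zlib.crc32(disk["block"])

def overwrite_in_place(new_data: bytes, crash_between: bool):
    disk["block"] = new_data                  # update 1: data, in place
    if crash_between:
        return                                # power loss before update 2
    disk["csum"] = zlib.crc32(new_data)       # update 2: checksum

overwrite_in_place(b"new contents", crash_between=True)

# After the "crash" the data is new but the checksum is stale: the
# filesystem cannot tell this apart from silent corruption.
assert zlib.crc32(disk["block"]) != disk["csum"]
```

COW sidesteps the race by writing the new data and new checksum elsewhere and switching to them with a single atomic pointer update, which is exactly why dropping COW means dropping data checksums.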
Chris Murphy posted on Thu, 17 Sep 2015 12:35:41 -0600 as excerpted:
> You'd use Btrfs snapshots to create a subvolume for doing backups of
> the images, and then get rid of the Btrfs snapshot.
The caveat here is that if the VM/DB is active during the backups (btrfs
send/receive or other),
On Thu, Sep 17, 2015 at 07:56:08PM +0200, Gert Menke wrote:
> MD+LVM is very close to what I want, but md has no way to cope with silent
> data corruption. So if I'd want to use a guest filesystem that has no
> checksums either, I'm out of luck.
> I'm honestly a bit confused here - isn't
> -Original Message-
> From: linux-btrfs-ow...@vger.kernel.org [mailto:linux-btrfs-
> ow...@vger.kernel.org] On Behalf Of Brendan Heading
> Sent: Wednesday, 16 September 2015 9:36 PM
> To: Duncan <1i5t5.dun...@cox.net>
> Cc: linux-btrfs@vger.kernel.org
> Subjec
On 2015-09-16 07:35, Brendan Heading wrote:
Btrfs has two possible solutions to work around the problem. The first
one is the autodefrag mount option, which detects file fragmentation
during the write and queues up the affected file for a defragmenting
rewrite by a lower priority worker thread.
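The autodefrag mechanism described above can be sketched as follows (a toy model of the idea, not the kernel's implementation: small non-appending writes flag the file, and a lower-priority background pass rewrites it):

```python
fragmented_queue = []

def on_write(path: str, offset: int, length: int, file_size: int):
    """Toy detection rule: a small write into the middle of a file
    suggests fragmentation, so queue the file for background rewrite."""
    small = length < 64 * 1024
    random_write = offset + length < file_size      # not an append
    if small and random_write and path not in fragmented_queue:
        fragmented_queue.append(path)

def background_defrag():
    """Lower-priority worker: rewrite queued files contiguously."""
    while fragmented_queue:
        path = fragmented_queue.pop(0)
        # a real implementation would re-read and rewrite the extents
        print(f"defragmenting {path}")

on_write("vm.img", offset=4096, length=4096, file_size=10 * 2**30)
on_write("log.txt", offset=0, length=128 * 1024, file_size=128 * 1024)
assert fragmented_queue == ["vm.img"]
```

The "works best on the small end" caveat follows directly from the model: for a large, constantly rewritten VM image the rewrite itself becomes significant I/O, which is why the thread steers VM workloads elsewhere.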
> Btrfs has two possible solutions to work around the problem. The first
> one is the autodefrag mount option, which detects file fragmentation
> during the write and queues up the affected file for a defragmenting
> rewrite by a lower priority worker thread. This works best on the small
> end,
As others have said here, it's probably not going to work for you
especially if you want to use regular scheduled btrfs snapshots on the
host (which I consider to be 50% of the reason why I use btrfs in the
first place).
Once I had learned this lesson the hard way, I had a xen server using
If you don't need image portability use an LVM logical volume for
backing of the VM. That LV gets partitioned as if it were a disk, and
you can use Btrfs for root home data or whatever.
If you need image portability, e.g. qcow2, then I'd put it on ext4 or
XFS, and you can use Btrfs within the VM
Gert Menke posted on Tue, 15 Sep 2015 23:34:04 +0200 as excerpted:
> I'm not 100% sure if this is the right place to ask[.]
It is. =:^)
> I want to build a virtualization server to replace my current home
> server. I'm thinking about a Debian system with libvirt/KVM. The system
> will have one
42 matches