On Wed, Oct 14, 2015 at 9:47 PM, Chris Murphy wrote:
>
> For that matter, now that GlusterFS has checksums and snapshots...
Interesting - I haven't kept up with that. Does it actually do
end-to-end checksums? That is, compute the checksum at the time of
storage, store
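[Not part of the original thread: a rough sketch of what "end-to-end
checksums" means here, in Python. This is a hypothetical illustration of
the general technique (checksum computed at write time, stored with the
data, re-verified on every read), not GlusterFS's actual implementation.]

```python
import hashlib

# Hypothetical illustration of end-to-end checksumming: the checksum is
# computed when the data is first written, stored alongside it, and
# re-verified on every read so corruption anywhere in the stack is caught.

def store(blob_store, key, data):
    # Compute the checksum at write time and persist it with the data.
    digest = hashlib.sha256(data).hexdigest()
    blob_store[key] = (data, digest)

def read(blob_store, key):
    data, digest = blob_store[key]
    # Recompute on read; a mismatch means silent corruption somewhere
    # between the original write and this read.
    if hashlib.sha256(data).hexdigest() != digest:
        raise IOError("checksum mismatch for %r" % key)
    return data

blobs = {}
store(blobs, "block0", b"hello")
assert read(blobs, "block0") == b"hello"

# Simulate bit rot: corrupt the stored bytes but leave the checksum alone.
_, old_digest = blobs["block0"]
blobs["block0"] = (b"hellp", old_digest)
try:
    read(blobs, "block0")
except IOError:
    print("corruption detected")
```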
I would not use Raid56 in production. I've tried using it a few
different ways but have run into trouble with stability and
performance. Raid10 has been working excellently for me.
On Wed, Oct 14, 2015 at 3:19 PM, Sjoerd wrote:
> Hi all,
>
> Is RAID6 still considered
On 14/10/2015 22:23, Donald Pearson wrote:
> I would not use Raid56 in production. I've tried using it a few
> different ways but have run into trouble with stability and
> performance. Raid10 has been working excellently for me.
Hi, could you elaborate on the stability and performance
Hi all,
Is RAID6 still considered unstable so I shouldn't use it in production?
The latest I could find about a test scenario is more than a year ago
(http://marc.merlins.org/perso/btrfs/post_2014-03-23_Btrfs-Raid5-Status.html)
I want to build a new NAS (6 disks of 4TB) on RAID6 and prefer to
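[Not from the original thread: a quick back-of-the-envelope capacity
comparison for the proposed 6 x 4TB NAS, my own figures, ignoring
metadata overhead.]

```python
# Usable capacity for 6 x 4TB under the two profiles discussed in the
# thread (rough figures; real-world usable space will be a bit lower).
disks = 6
size_tb = 4

# RAID6: two disks' worth of parity, the rest is usable.
raid6_usable = (disks - 2) * size_tb    # 16 TB

# RAID10: mirrored pairs, so half the raw capacity.
raid10_usable = disks * size_tb // 2    # 12 TB

print(raid6_usable, raid10_usable)
```

So raid6 buys 4TB more usable space, at the cost of the stability
concerns raised elsewhere in this thread.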
On 14/10/2015 22:53, Donald Pearson wrote:
> I've used it from 3.8 something to current; it does not handle drive
> failure well at all, which is the point of parity raid. I had a 10-disk
> Raid6 array on 4.1.1 and a drive failure put the filesystem in an
> irrecoverable state. Scrub speeds are also an order of magnitude or
> more slower in my own
On Wed, Oct 14, 2015 at 4:53 PM, Donald Pearson
wrote:
>
> Personally I would still recommend zfs on illumos in production,
> because it's nearly unshakeable and the creative things you can do to
> deal with problems are pretty remarkable. The unfortunate reality is
>
btrfs does handle mixed device sizes really well actually. And you're
right, zfs is limited to the smallest drive x vdev width. The rest
goes unused. You can do things like pre-slice the drives with sparse
files and create zfs on those files, but then you'll load up those
larger drives with a
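[Not from the original thread: my own numbers putting the
smallest-drive limitation into figures, e.g. a hypothetical raidz1 vdev
mixing 4TB and 6TB drives.]

```python
# Hypothetical mixed-size vdev: zfs only uses min(sizes) per member,
# so the extra space on the larger drives goes unused.
sizes_tb = [4, 4, 6, 6]
parity = 1  # raidz1

usable = (len(sizes_tb) - parity) * min(sizes_tb)   # 12 TB usable
wasted = sum(s - min(sizes_tb) for s in sizes_tb)   # 4 TB unused

print(usable, wasted)
```

btrfs, by contrast, allocates chunks per device, which is why it copes
with the mixed sizes as described above.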
On Wed, Oct 14, 2015 at 3:15 PM, Rich Freeman
wrote:
> This is the main thing that has kept me away from zfs - you can't
> modify a vdev, like you can with an md array or btrfs.
A possible workaround is ZoL (ZFS on Linux) used as a GlusterFS brick.
For that matter,
Sjoerd posted on Wed, 14 Oct 2015 22:19:50 +0200 as excerpted:
> Is RAID6 still considered unstable so I shouldn't use it in production?
> The latest I could find about a test scenario is more than a year ago
> (http://marc.merlins.org/perso/btrfs/post_2014-03-23_Btrfs-Raid5-Status.html)
>
> I