2012-03-21 7:16, MLR wrote:
I read the "ZFS_Best_Practices_Guide" and "ZFS_Evil_Tuning_Guide", and have some
questions:
1. Cache device for L2ARC
Say we get a decent SSD, ~500MB/s read/write. If we have a 20-HDD zpool
setup, shouldn't we be reading at least in the 500MB/s read/write range?
On Wed, Mar 21, 2012 at 7:56 AM, Jim Klimov wrote:
> 2012-03-21 7:16, MLR wrote:
> One thing to note is that many people would not recommend using
> a "disbalanced" ZFS array - one expanded by adding a TLVDEV after
> many writes, or one consisting of differently-sized TLVDEVs.
>
> ZFS does a rath
2012-03-21 16:41, Paul Kraus wrote:
I have been running ZFS in a mission-critical application since
zpool version 10 and have not seen any issues with some of the vdevs
in a zpool being full while others are virtually empty. We have been running
commercial Solaris 10 releases. The configuration wa
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of MLR
>
> Say we get a decent SSD, ~500MB/s read/write. If we have a 20-HDD zpool
> setup, shouldn't we be reading at least in the 500MB/s read/write range? Why
> would we want a ~500MB/s cache?
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of MLR
>
> c. 2 vdevs of 4x 1.5TB disks (raidz) + 3 vdevs of 4x 3TB disks (raidz)? (~500MB/s
> reading, maximize vdevs for performance)
If possible, spread your vdevs across 4 different controllers.
On Tue, Mar 20, 2012 at 11:16 PM, MLR wrote:
> 1. Cache device for L2ARC
> Say we get a decent SSD, ~500MB/s read/write. If we have a 20-HDD zpool
> setup, shouldn't we be reading at least in the 500MB/s read/write range? Why
> would we want a ~500MB/s cache?
Without knowing the I/O pattern, saying 500 MB/sec. is meaningless.
2012-03-21 17:28, Edward Ned Harvey wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of MLR
...
>> Am I correct in thinking this means, for example, if I have a single
>> 14 disk raidz2 vdev zpool,
It's not advisable to put more than ~8 disks in a single vdev, because it
really hurts during resilver time.
On Wed, Mar 21, 2012 at 9:51 AM, Jim Klimov wrote:
> 2012-03-21 17:28, Edward Ned Harvey wrote:
>> It's not advisable to put more than ~8 disks in a single vdev, because it
>> really hurts during resilver time. Maybe a week or two to resilver like
>> that.
>
> Yes, that's important to note also.
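
A rough back-of-envelope sketch (Python) of why a wide, well-filled raidz
vdev can take so long to resilver: on a fragmented pool the resilver is
roughly bound by the random-read IOPS of a single member disk, since the
allocated blocks are walked more or less individually. All figures below
are assumptions for illustration, not numbers reported in this thread.

    used_per_disk_tb = 2.0    # assumed data to reconstruct per disk, in TB
    avg_block_kb     = 32     # assumed average allocated block size
    disk_iops        = 150    # typical random-read IOPS for a 7200rpm HDD

    blocks  = used_per_disk_tb * 1e9 / avg_block_kb   # 1 TB ~ 1e9 KB
    seconds = blocks / disk_iops
    print(f"~{seconds / 86400:.1f} days")             # ~4.8 days with these inputs

With a larger pool, smaller average blocks, or competing production I/O,
the same model stretches easily into the "week or two" range quoted above.
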
p...@kraus-haus.org said:
> Without knowing the I/O pattern, saying 500 MB/sec. is meaningless.
> Achieving 500MB/sec. with 8KB files and lots of random accesses is really
> hard, even with 20 HDDs. Achieving 500MB/sec. of sequential streaming of
> 100MB+ files is much easier.
> . . .
> For
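
A minimal sketch (Python) of the arithmetic behind that point; the per-disk
figures are typical assumptions, not measurements from this thread:

    disks          = 20
    disk_iops      = 150     # assumed random IOPS per 7200rpm HDD
    disk_stream_mb = 100     # assumed sequential MB/s per HDD
    io_size_kb     = 8

    # optimistic case: 20 fully independent spindles (raidz is worse, see below)
    random_mb = disks * disk_iops * io_size_kb / 1024   # ~23 MB/s
    stream_mb = disks * disk_stream_mb                   # ~2000 MB/s, bus permitting
    print(f"8KB random:  ~{random_mb:.0f} MB/s")
    print(f"sequential:  ~{stream_mb:.0f} MB/s")

This is also why a ~500MB/s SSD still earns its keep as L2ARC: its tens of
thousands of random IOPS dwarf what 20 spindles can deliver on small reads.
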
2012-03-21 21:40, Marion Hakanson wrote:
Small random-read performance does not scale with the number of drives in each
raidz[123] vdev because of the dynamic striping. In order to read a single
logical block, ZFS has to read all the segments of that logical block, which
have been spread out across the drives of that vdev.
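
In other words, each raidz top-level vdev behaves roughly like one disk for
small random reads. A tiny illustration of that rule (Python, with an assumed
per-disk IOPS figure and hypothetical layouts):

    disk_iops = 150   # assumed random-read IOPS per HDD

    def raidz_pool_read_iops(vdevs, disks_per_vdev):
        # every data disk in a raidz vdev participates in each small read,
        # so the width of the vdev barely matters - only the vdev count does
        return vdevs * disk_iops

    print(raidz_pool_read_iops(1, 20))   # 1 x 20-disk raidz2 -> ~150 IOPS
    print(raidz_pool_read_iops(4, 5))    # 4 x  5-disk raidz1 -> ~600 IOPS
    print(20 * disk_iops)                # 10 x 2-way mirrors (reads can hit
                                         # either side)       -> ~3000 IOPS
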
comments below...
On Mar 21, 2012, at 10:40 AM, Marion Hakanson wrote:
> p...@kraus-haus.org said:
>> Without knowing the I/O pattern, saying 500 MB/sec. is meaningless.
>> Achieving 500MB/sec. with 8KB files and lots of random accesses is really
>> hard, even with 20 HDDs. Achieving 500MB/sec. of sequential streaming of
>> 100MB+ files is much easier.
Thank you all for the information; it is much clearer to me now.
"Sequential Reads" should scale with the number of disks in the entire
zpool (regardless of the number of vdevs), and "Random Reads" will scale with
just the number of vdevs (i.e., the idea I had before only applies to "Random
Reads"), wh
On Wed, 21 Mar 2012, maillist reader wrote:
I read, though, that ZFS does not have a "defragmentation" tool - is this still
the case? It would seem, with such a
performance difference between "sequential reads" and "random reads" for
raidzN, that a defragmentation tool would be
very high on ZFS's TODO list