Scott Lovenberg wrote:
> First Post!
> Sorry, I had to get that out of the way to break the ice...

Welcome!

> I was wondering if it makes sense to zone ZFS pools by disk slice, and if it 
> makes a difference with RAIDZ.  As I'm sure we're all aware, the end of a 
> drive is half as fast as the beginning (where the zoning stipulates that the 
> physical outside of the platter is the beginning, and moving toward the 
> spindle increases the logical block address).

IMHO, it makes sense to short-stroke if you are looking for the
best performance.  But raidz (or RAID-5) will not give you the
best performance.  You'd be better off mirroring for performance.
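
For example, here's a rough sketch of both layouts on the fast outer
slices of four drives (the device, slice, and pool names are
hypothetical -- adjust for your controller layout):

    # mirrored pairs on the outer (fastest) slice of each of four drives
    zpool create fast mirror c1t0d0s0 c2t0d0s0 mirror c3t0d0s0 c4t0d0s0

    # versus one raidz vdev across the same four slices
    zpool create tank raidz c1t0d0s0 c2t0d0s0 c3t0d0s0 c4t0d0s0

The mirrored pool stripes across two top-level vdevs and can satisfy
independent random reads from either side of each mirror; a raidz vdev
spreads each block across all of its disks, so for small random reads
it behaves more like a single spindle.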

> I usually short-stroke my drives so that the variable files on the operating 
> system drive are at the beginning, page space in the center (so if I'm 
> already thrashing, I'm at most half a platter's width from page), and static 
> files are toward the end.  So, applying this methodology to ZFS, I partition 
> each drive into four equal slices, do this to four drives (each on a 
> separate SATA channel), and then create four pools, each holding one 'ring' 
> across the drives.  Will I then have four RAIDZ pools that I can mount 
> according to speed needs?  For instance, I always put (in Linux... I'm new 
> to Solaris) '/export/archive' all the way on the slow tracks, since I don't 
> read or write to it often and it is almost never accessed at the same time 
> as anything else that would force long seeks.
> 
> Ideally, I'd like to do a straight (non-redundant) ZFS filesystem on the 
> archive ring.  I move data to archive in 4 GB chunks - when I roll it in, I 
> burn two DVDs, one cataloged locally and the other kept offsite, so if I 
> lose the data on disk, I don't care - but ZFS gives me the ability to 
> snapshot to archive (I assume it works across pools?).  Then stripe one ring 
> (I guess striping is ZFS's native behavior?) for /usr/local (or its Solaris 
> equivalent), for performance.  Then mirror the root slice.  Finally, /export 
> would be RAIDZ or RAIDZ2 on the fastest track, holding my source code, large 
> files, and things I want to stream over the LAN.
> 
> Does this make sense with ZFS?  Is the spindle count more of a factor than 
> seek latency?  Does ZFS balance these things out on its own via random 
> placement?

Spindle count almost always wins for performance.
Note: bandwidth usually isn't the source of perceived performance
problems; latency is.  We believe this has implications for ZFS over
time, due to copy-on-write (COW), but nobody has characterized this yet.
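
If you take that to its conclusion, you'd give ZFS whole disks in one
pool and let it stripe dynamically across all the top-level vdevs.  A
sketch, again with hypothetical device names:

    # one pool, three mirrored pairs: all six spindles serve every dataset
    zpool create tank mirror c1t0d0 c2t0d0 mirror c3t0d0 c4t0d0 \
        mirror c5t0d0 c6t0d0

As a bonus, when ZFS is given whole disks (rather than slices) it will
enable the drives' write caches for you.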

> Reading back over this post, it sounds like the ramblings of a madman.  I 
> guess I know what I want to say, but I'm not sure of the right questions to 
> ask.  I think I'm asking:  Will my proposed setup afford me the flexibility 
> to zone for performance, since I have intimate knowledge of the data going 
> onto the drives, or will brute force by spindle count (I'm planning 4-6 
> drives, one drive per bus) and random placement be sufficient if I just add 
> each whole drive to a single pool?

Yes :-)  YMMV.
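
One note on the earlier question: snapshots are taken per-dataset
within a pool, but you can copy them to another pool with
zfs send/receive.  A sketch, with made-up dataset names:

    # snapshot the working dataset, then replicate it into the archive pool
    zfs snapshot tank/projects@2007-03-01
    zfs send tank/projects@2007-03-01 | zfs receive archive/projects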

> I thank you all for your time and patience as I stumble through this, and I 
> welcome any points of view or insights (especially those from experience!) 
> that might help me decide how to configure my storage server.

KISS.

There are trade-offs among space, performance, and RAS.  We have models
to describe these, so you might check out my blog posts on the subject:
        http://blogs.sun.com/relling
  -- richard