On Wed, 2010-04-07 at 10:40 -0700, Jason S wrote:
> I have been searching this forum and just about every ZFS document I can find 
> trying to find the answer to my questions. But I believe the answer I am 
> looking for is not going to be documented and is probably best learned from 
> experience.
> 
> 
> This is my first time playing around with OpenSolaris and ZFS. I am in
>  the midst of replacing my home-based file server. This server hosts
>  all of my media files, from MP3s to Blu-ray ISOs. I stream media
>  from this file server to several media players throughout my house. 
> 
> The server consists of a Supermicro X6DHE-XG2 motherboard, 2 x 2.8GHz
>  Xeon processors, 4GB of RAM and 2 Supermicro SAT2MV8 controllers. I
>  have 14 1TB Hitachi hard drives connected to the controllers.
> 
If you can at all afford it, upgrade your RAM to 8GB. More than anything
else, I've found that additional RAM makes up for any other deficiencies
with a ZFS setup.  4GB is OK, but 8GB is a pretty sweet spot for
price/performance for a small NAS server.
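
If you want to see how much of your RAM the ARC is actually using once
the box is under load, the kernel stats will tell you. Something like
this on OpenSolaris (just a sketch; run as root):

    # Current ARC size and target size, in bytes
    kstat -p zfs:0:arcstats:size zfs:0:arcstats:c

    # Or a friendlier summary via the kernel debugger
    echo ::arc | mdb -k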


> My initial thought was to just create a single 14-drive RAIDZ2 pool,
>  but I have read over and over again that I should be limiting each
>  array to a max of 9 drives. So then I would end up with 2 x 7-drive
>  RAIDZ arrays. 
> 
That's correct. You can certainly do a 14-drive RAIDZ2, but given how
data gets striped across such a wide vdev, you'll likely see noticeably
lower performance than with a 2x7-drive setup.
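
For reference, the 2x7 layout is just one pool with two raidz vdevs,
something like the following ("tank" and the cXtYdZ device names are
placeholders for whatever format(1M) shows on your system; use raidz2
instead of raidz if you go double-parity):

    # Two 7-drive raidz vdevs in one pool; ZFS stripes writes across both
    zpool create tank \
        raidz  c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0 \
        raidz  c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 c2t6d0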


> To keep the pool size at 12TB I would have to give up my extra parity
>  drive going to this 2-array setup, and that is concerning as I have no
>  room for hot spares in this system. So in my mind I am left with only
>  one other choice, which is going to 2 x RAIDZ2 and losing an
>  additional 2TB, leaving me with a 10TB ZFS pool.
> 
You've pretty much hit it right there.  There is *one* other option:
create a zpool of two raidz1 vdevs, one with 6 drives and one with 7
drives, then add a hot spare to the pool.  That will give you most of
the performance of a 2x7 setup, with the capacity of 11 disks.  The
tradeoff is that it's a bit less reliable: you have to trust the hot
spare to resilver before any additional drive fails in the degraded
vdev.  For a home NAS, though, it's likely a reasonable bet.
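
Spelled out, that option looks something like this (again, the pool and
device names are just placeholders):

    # 6-drive raidz1 + 7-drive raidz1 + one hot spare = all 14 drives
    # Usable capacity: (6-1) + (7-1) = 11 disks' worth
    zpool create tank \
        raidz  c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 \
        raidz  c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 c2t6d0 \
        spare  c1t6d0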


> So my big question is: given that I am working with 4MB - 50GB files,
>  is going with 14 spindles going to incur a huge performance hit? I was
>  hoping to be able to saturate a single GigE link with this setup, but
>  I am concerned the single large array won't let me achieve this.
> 
Frankly, testing is the only way to be sure. :-)
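
A quick and dirty baseline, if you can spare the pool for a few minutes
(file and pool names are just examples; note that zero-filled data will
flatter the numbers if you have compression enabled):

    # Sequential write: push ~10GB at the pool and time it
    dd if=/dev/zero of=/tank/ddtest bs=1024k count=10240

    # In another terminal, watch per-vdev throughput while it runs
    zpool iostat -v tank 5

    # Read it back (export/import the pool first if you want to
    # defeat the ARC), then clean up
    dd if=/tank/ddtest of=/dev/null bs=1024k
    rm /tank/ddtest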

Writing files that large (and reading them back more frequently, I
assume...) will tend to shrink the performance difference between a
1x14 and a 2x7 setup. One way to keep your 1Gb Ethernet link saturated
is to increase the RAM (as noted above). With 8GB of RAM, you should
have enough buffer space in play to mask the differences in large-file
I/O between the 1x14 and 2x7 setups; 12GB or 16GB would erase pretty
much any noticeable difference.

For small random I/O, even with larger amounts of RAM, you'll notice
some difference between the two setups. Exactly how noticeable I can't
say; you'd have to try it to see, as it depends heavily on your access
pattern.

> 
> aaaahhhhhh, decisions, decisions....
> 
> Any advice would be greatly appreciated.


One thing Richard or Bob might be able to answer better is the tradeoff
between getting a cheap/small SSD for L2ARC and buying more RAM. That
is, I don't have a good feel for whether (for your usage pattern) it
would be better to add another 8GB of RAM, or to buy something like a
cheap 40-60GB SSD for use as an L2ARC (or some combination of the two).
SSDs in that size range run $150-200, which is about what 8GB of DDR1
ECC RAM will likely cost.
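
If you do go the SSD route, hanging it off the pool as L2ARC is a
one-liner (device name is a placeholder), and it's a low-risk
experiment since cache devices can be removed again later:

    # Add the SSD as a cache (L2ARC) device
    zpool add tank cache c3t0d0

    # Change your mind? Take it back out
    zpool remove tank c3t0d0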


-- 
Erik Trimble
Java System Support
Mailstop:  usca22-123
Phone:  x17195
Santa Clara, CA
Timezone: US/Pacific (GMT-0800)

_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
