On Mon, Feb 26, 2007 at 06:36:47PM -0800, Richard Elling wrote:
> Jens Elkner wrote:
> >Currently I'm trying to figure out the best zfs layout for a thumper
> >wrt. read AND write performance.
> 
> First things first.  What is the expected workload?  Random, sequential,
> lots of little files, few big files, 1 Byte iops, synchronous data,
> constantly changing access times, ???

Mixed, i.e.:
1) as a home server for students' and staff's ~, so small and big files
   (BTW: what is small and what is big?) as well as compressed/text files
   (you know, the more space people have, the messier they get ...) -
   shared via samba and nfs
2) "app server" in the sense of shared nfs space, where applications get
   installed once and can be used everywhere, e.g. eclipse, soffice,
   jdk*, teX, Pro Engineer, studio 11 and the like.
   Later I want the same functionality for firefox, thunderbird,
   etc. for windows clients via samba, but that requires a bit more
   tweaking to get it working, i.e. time I do not have right now ...
   Anyway, when ~ 30 students start a monster app like eclipse,
   oxygen or soffice at once (which happens quite frequently in
   seminars), I would be lucky to get the same performance via nfs as
   from a local HDD ...
3) Video streaming, i.e. capturing as well as broadcasting/editing via
   smb/nfs.
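
Dataset-wise I would probably split those workloads into separate
filesystems on one pool - just a rough sketch, the pool and dataset names
("tank", home/apps/video) are placeholders, nothing is decided yet:

        zfs create tank/home               # students'/staff's ~, samba + nfs
        zfs set compression=on tank/home   # lots of text files, compresses well
        zfs set sharenfs=rw tank/home
        zfs create tank/apps               # shared application installs
        zfs set sharenfs=ro tank/apps      # read-mostly, export read-only
        zfs create tank/video              # capture / broadcast / editing
        zfs set sharenfs=rw tank/video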

> In general, striped mirror is the best bet for good performance with 
> redundancy.

Yes - I thought about doing a
        mirror c0t0d0 c1t0d0 mirror c4t0d0 c6t0d0 mirror c7t0d0 c0t4d0 \
        mirror c0t1d0 c1t1d0 mirror c4t1d0 c5t1d0 mirror c6t1d0 c7t1d0 \
        mirror c0t2d0 c1t2d0 mirror c4t2d0 c5t2d0 mirror c6t2d0 c7t2d0 \
        mirror c0t3d0 c1t3d0 mirror c4t3d0 c5t3d0 mirror c6t3d0 c7t3d0 \
        mirror c1t4d0 c7t4d0 mirror c4t4d0 c6t4d0 \
        mirror c0t5d0 c1t5d0 mirror c4t5d0 c5t5d0 mirror c6t5d0 c7t5d0 \
        mirror c0t6d0 c1t6d0 mirror c4t6d0 c5t6d0 mirror c6t6d0 c7t6d0 \
        mirror c0t7d0 c1t7d0 mirror c4t7d0 c5t7d0 mirror c6t7d0 c7t7d0
(probably removing the 5th line and using those drives as hot spares).
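
If I go that route, the drives from the 5th line would become hot spares
instead of a data vdev - something like this (pool name "tank" is just a
placeholder):

        zpool add tank spare c1t4d0 c7t4d0 c4t4d0 c6t4d0
        zpool status tank     # the spares should show up as AVAIL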

But perhaps it might be better to split the mirrors into 3 different
pools (though I'm not sure why: my brain says no, my gut says yes ;-)).

> >I did some simple mkfile 512G tests and found out that on average ~ 500
> >MB/s seems to be the maximum one can reach (tried the initial default
> >setup, all 46 HDDs as R0, etc.).
> 
> How many threads?  One mkfile thread may be CPU bound.

Very good point! Using 2 mkfile 256G I got (min/max/avg) 473/750/630
MB/s (via zpool iostat 10) with the layout shown above and no
compression enabled. To double-check, I also got:

        4 mkfile 128G:  407/815/588 MB/s
        3 mkfile 170G:  401/788/525 MB/s
        1 mkfile 512G:  397/557/476 MB/s
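
For reference, the test boils down to something like this (the pool name
"tank" and the file names are just placeholders for what I actually used):

        # two parallel writers; watch the aggregate write bandwidth
        mkfile 256G /tank/t1 &
        mkfile 256G /tank/t2 &
        zpool iostat 10 180 &   # report every 10s; min/max/avg read off
                                # the bandwidth column
        wait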

Regards,
jel.
-- 
Otto-von-Guericke University     http://www.cs.uni-magdeburg.de/
Department of Computer Science   Geb. 29 R 027, Universitaetsplatz 2
39106 Magdeburg, Germany         Tel: +49 391 67 12768
