Jim Davis wrote:
eric kustarz wrote:
What about adding a whole new RAID-Z vdev and dynamically striping across the RAID-Zs? Your capacity and performance will go up with each RAID-Z vdev you add.

Thanks, that's an interesting suggestion.

This has the benefit of allowing you to grow into your storage.
Also, a 3-disk raid-z set has better reliability than a 4-disk set.
Performance will be about the same, so if you have 12 disks, four
3-disk raid-z sets will perform better and be more reliable than three
4-disk sets.  The available space will be smaller; there is no free lunch.
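The space tradeoff described above can be checked with quick arithmetic. A minimal sketch, assuming single-parity raid-z (one disk's worth of parity per set) and a hypothetical disk size of 500 GB — the disk size is an illustration, not from the thread:

```python
# Hypothetical capacity comparison for 12 disks of 500 GB each.
# Each single-parity raid-z set gives up one disk's worth of space to parity.

DISK_GB = 500  # assumed disk size, for illustration only

def raidz1_usable_gb(num_sets: int, disks_per_set: int) -> int:
    """Usable space: (disks_per_set - 1) data disks per raid-z set."""
    return num_sets * (disks_per_set - 1) * DISK_GB

four_by_three = raidz1_usable_gb(4, 3)   # four 3-disk sets
three_by_four = raidz1_usable_gb(3, 4)   # three 4-disk sets

print(four_by_three)   # 4000 GB usable
print(three_by_four)   # 4500 GB usable
```

Both layouts use the same 12 disks, but the four-set layout spends four disks on parity instead of three, which is where the extra reliability (and the smaller usable space) comes from.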
 -- richard

Have you tried using the automounter as suggested by the linux faq?:
http://nfs.sourceforge.net/#section_b

Yes. On our undergrad timesharing system (~1300 logins) we actually hit that limit with a standard automounting scheme. So now we make static mounts of the Netapp /home space and then use amd to make symlinks to the home directories. Ugly, but it works.

<geezer mode>
Solaris folks shouldn't laugh too hard: SunOS 4 had an artificial limit
on the number of client mount points too -- a bug that read only 8 kB
from the mnttab; if mnttab overflowed, you hung.  That was fixed many, many
years ago, and now mnttab is not actually a file at all ;-)
</geezer mode>
 -- richard
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss