I am having the same issues.  I started out with the Solaris 10 8/07 release.  
I could create all 47,000 filesystems, but if you ever needed to reboot, 
patch, or shut down... very bad.  So then I read about sharemgr and how it was 
supposed to mitigate these issues.  Well, after running a process that creates 
a ZFS filesystem for each of our users, with the shares coming from an 
inherited sharenfs property, I have only 6,400 filesystems created after three 
days of the process running.  Not very good.  I am going to try recreating the 
filesystems with sharenfs set on each filesystem individually, rather than 
inherited from the top of the tree, to see if I can improve the speed.
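For anyone following along, the two approaches look roughly like this.  The 
names (tank/home, user001, user002) are placeholders, not our real layout:

  # Inherited sharing: set sharenfs once at the top of the tree;
  # every child filesystem inherits the property and is shared
  # automatically when it is created.
  zfs set sharenfs=on tank/home
  zfs create tank/home/user001

  # Per-filesystem sharing: leave the parent unshared and set
  # sharenfs directly on each user filesystem instead.
  zfs create tank/home/user002
  zfs set sharenfs=on tank/home/user002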
In case you are wondering, here is the system I have configured:

Sun Fire V245 with 16 GB of RAM and 2 processors
3 Apple Xserve RAIDs in a RAID-Z config -- each Xserve RAID really presents as 
2 direct-attached arrays, so there are 6 independent connections to the 
server.  In effect it is a RAID-Z set built from 6 devices.

You may ask why I would have that many filesystems.  Well, as one of the 
posters above pointed out, we had this same setup on an aging Sun Enterprise 
450: all 47,000 user home directories stored on it, with per-user quotas 
enabled.  We needed to migrate this to a new system, and ZFS seemed like the 
answer to all our problems.  We have 16 TB of raw disk and needed a filesystem 
that could address all of it and was robust and very safe/secure.  Enter 
ZFS... maybe.  Well, I can see all the disk, I can create all the filesystems, 
and the load testing I have done is impressive.  I just can't share them.
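For what it's worth, the quotas map over naturally: instead of per-user UFS 
quota entries, each user's ZFS filesystem gets its own quota property.  A 
minimal sketch, again with placeholder names and an arbitrary example size:

  zfs create tank/home/user001
  zfs set quota=500M tank/home/user001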
 
 