Matthew Ahrens wrote:
| I believe this is because sharemgr does an O(number of shares) operation
| whenever you try to share/unshare anything (retrieving the list of
| shares from the kernel to make sure that it isn't/is already shared). I
| couldn't
I too am having the same issues. I started out using the Solaris 10 8/07
release. I could create all the filesystems, 47,000 of them, but if you
needed to reboot, patch, or shut down: very bad. So then I read about
sharemgr and how it was supposed to mitigate these issues. Well, after
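The cost described above compounds: if every share operation rescans the list of existing shares, then sharing N filesystems does on the order of N^2/2 scans in total. A toy shell sketch (hypothetical, not the real sharemgr code) of that arithmetic:

```shell
# Toy model: each new share does one full pass over the shares already
# present, so sharing N filesystems costs N*(N-1)/2 scans in total.
count_lookups() {
  n=$1
  total=0
  i=0
  while [ "$i" -lt "$n" ]; do
    total=$((total + i))   # one pass over the i shares already present
    i=$((i + 1))
  done
  echo "$total"
}

count_lookups 1000    # prints 499500
```

At 47,000 filesystems that is over a billion scans, which is consistent with share/unshare grinding to a crawl at this scale.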
On Mon, 28 Jan 2008, Chris wrote:
I did a little more digging and found some interesting things. NFSv4
mirror mounts: this would seem to be the most logical option. In this
scenario the client would connect to a single mount, /tank/users, but
would be able to move through the
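A rough sketch of the mirror-mounts scenario described above, assuming a Solaris NFSv4 client with mirror-mount support (paths and server name are hypothetical):

```shell
# Client mounts only the parent filesystem over NFSv4:
mount -F nfs -o vers=4 server:/tank/users /mnt/users

# Crossing into a child ZFS filesystem on the server triggers an
# automatic "mirror mount" of that child on the client, so one mount
# covers the whole tree of per-user filesystems:
cd /mnt/users/alice
```

The appeal at this scale is that the client never needs 47,000 explicit mount entries; child filesystems are mounted lazily as they are traversed.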
New, yes. Aware - probably not.
Given cheap filesystems, that users would create many filesystems was an
easy guess, but I somehow don't think anybody envisioned that users
would be creating tens of thousands of filesystems.
ZFS - too good for its own good :-p
IMO (and given mails/posts
I remember reading a discussion where these kinds of problems were
discussed. Basically it boils down to everything not being aware of the
radical changes in the filesystems concept.
All these things are being worked on, but it might take some time before
everything is made aware that yes, it's no
On Wed, Jan 23, 2008 at 08:02:22AM -0800, Akhilesh Mritunjai wrote:
| I remember reading a discussion where these kinds of problems were
| discussed. Basically it boils down to everything not being aware of
| the radical changes in the filesystems concept.
| All these things are being worked on, but
Anyone out there using sharenfs=on with a large number
of filesystems? We have over 1 filesystems, all in one
pool. Everything is great until we turn on sharenfs
(zfs set sharenfs=on poolName). Once that is enabled,
zfs create poolName/filesystem takes about 5 minutes to
complete. If nfs
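The steps in the question above can be reproduced as follows (a sketch; `poolName` is the placeholder from the original post, and the timing is the poster's report, not a guarantee):

```shell
# Enable NFS sharing on the pool's root dataset; child filesystems
# inherit sharenfs=on, so every new filesystem is shared on creation.
zfs set sharenfs=on poolName

# With tens of thousands of inherited shares, the share step performed
# during creation re-walks the existing share list, so this is the
# command the poster reports taking about 5 minutes:
time zfs create poolName/filesystem
```

With sharenfs left off, the same `zfs create` is near-instant, which is what isolates the slowdown to the sharing step rather than the dataset creation itself.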