Re: [zfs-discuss] ZFS Performance with Thousands of File Systems

2007-06-30 Thread David Magda
On Jun 30, 2007, at 17:08, Richard Elling wrote: Excellent question. The problem with using file system quotas for a service such as a mail store is that you have very little control over the implementation of policies. The only thing the mail service knows is that a write to a mailbox fails. FWIW I

[zfs-discuss] Re: ZFS Performance with Thousands of File Systems

2007-06-30 Thread Stephen Le
> I think you will find that managing quotas for services is better > when implemented by the service, rather than the file system. Thanks for the suggestion, Richard, but we're very happy with our current mail software, and we'd rather use file system quotas to control inbox sizes (our mail adm

[zfs-discuss] Re: ZFS Performance with Thousands of File Systems

2007-06-30 Thread Stephen Le
(accidentally replied off-list via email, posting message here) We've already considered pooling user quotas together, if that's what you're going to suggest. Pooling user quotas would present a few problems for us, as we have a fair number of users with unique quotas and the general user quota
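A hedged sketch of the "pooled quota" idea being weighed here: users on the general quota share one parent dataset and its limit, while users with unique limits keep their own dataset. Pool/dataset names and sizes are hypothetical.

  # Shared dataset for users on the general quota (names and sizes are made up).
  zfs create -o quota=50g tank/mail/general
  # A user with a unique limit keeps an individual dataset and quota.
  zfs create -o quota=2g tank/mail/special-user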

Re: [zfs-discuss] ZFS Performance with Thousands of File Systems

2007-06-30 Thread Richard Elling
David Magda wrote: On Jun 29, 2007, at 20:51, Stephen Le wrote: I'm investigating the feasibility of migrating from UFS to ZFS for a mail-store supporting 20K users. I need separate quotas for all of my users, which forces me to create separate ZFS file systems for each user. Does each and e
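For reference, a minimal sketch of the per-user layout under discussion, one ZFS filesystem per mailbox with its own quota; the pool name, user names, and the 200m limit are hypothetical.

  # Create one filesystem per user under a common parent and cap each with a quota
  # (pool/dataset names and the 200m limit are illustrative only).
  zfs create tank/mail
  for u in user0001 user0002 user0003; do
      zfs create "tank/mail/$u"
      zfs set quota=200m "tank/mail/$u"
  done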

Re: [zfs-discuss] Re: LZO compression?

2007-06-30 Thread Cyril Plisko
On 6/30/07, roland <[EMAIL PROTECTED]> wrote: some other funny benchmark numbers: I wondered how the performance/compressratio of lzjb, lzo and gzip would compare if we have an optimally compressible datastream. Since ZFS handles repeating zeros quite efficiently (i.e. allocating no space), I tried wri

[zfs-discuss] Re: DMU corruption

2007-06-30 Thread Peter Bortas
On 6/30/07, Peter Bortas <[EMAIL PROTECTED]> wrote: I'm currently doing a complete scrub, but according to the latest zpool status estimate it will be 63h before I know how that went... The scrub has now completed with 0 errors and there are no longer any corruption errors reported. -- Peter
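For anyone following along, the scrub/verify cycle referred to above looks roughly like this; the pool name is hypothetical.

  # Start a full scrub, then check progress and the final error summary.
  zpool scrub tank
  zpool status -v tank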

[zfs-discuss] Re: LZO compression?

2007-06-30 Thread roland
some other funny benchmark numbers: I wondered how the performance/compressratio of lzjb, lzo and gzip would compare if we have an optimally compressible datastream. Since ZFS handles repeating zeros quite efficiently (i.e. allocating no space), I tried writing non-zero values. The result is quite inter
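A rough sketch of how such a "maximally compressible but non-zero" test stream can be produced, since runs of zeros are short-circuited by ZFS. Dataset names and sizes are made up; lzo is not a stock ZFS option, so only lzjb and gzip are shown, and gzip requires a recent ZFS version.

  # Create test datasets with different compression settings
  # (names/sizes are illustrative; assumes default mountpoints under /tank).
  zfs create -o compression=lzjb tank/ctest-lzjb
  zfs create -o compression=gzip tank/ctest-gzip
  # Write a highly compressible but non-zero stream: map zero bytes to 0x55
  # so the all-zero block optimization does not kick in.
  dd if=/dev/zero bs=1024k count=256 | tr '\0' '\125' > /tank/ctest-lzjb/pattern
  dd if=/dev/zero bs=1024k count=256 | tr '\0' '\125' > /tank/ctest-gzip/pattern
  zfs get compressratio tank/ctest-lzjb tank/ctest-gzip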

Re: [zfs-discuss] DMU corruption

2007-06-30 Thread Peter Bortas
On 6/30/07, Matthew Ahrens <[EMAIL PROTECTED]> wrote: Peter Bortas wrote: > According to the zdb dump, object 0 seems to be the DMU node on each > file system. My understanding of this part of ZFS is very shallow, but > why does it allow the filesystems to be mounted rw with damaged DMU > nodes? Doesn't that result in a risk of more permanent damage to the

Re: [zfs-discuss] Problem in v3 on Solaris 10 and large volume mirroring

2007-06-30 Thread Matthew Ahrens
JS wrote: Solaris 10 u3, ZFS v3, kernel 118833-36. Running into a weird problem where I attach a mirror to a large, existing filesystem. The attach occurs and then the system starts swallowing all the available memory and system performance chokes while the filesystems sync. In some cases

Re: [zfs-discuss] DMU corruption

2007-06-30 Thread Matthew Ahrens
Peter Bortas wrote: According to the zdb dump, object 0 seems to be the DMU node on each file system. My understanding of this part of ZFS is very shallow, but why does it allow the filesystems to be mounted rw with damaged DMU nodes? Doesn't that result in a risk of more permanent damage to the
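For context, a sketch of the kind of zdb invocation that produces the object dump referred to here; the pool/filesystem name is hypothetical, and output formats vary between builds.

  # Dump metadata for object 0 of a filesystem at high verbosity
  # (pool/filesystem name is made up; zdb only reads, it does not modify the pool).
  zdb -dddd tank/somefs 0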

[zfs-discuss] Problem in v3 on Solaris 10 and large volume mirroring

2007-06-30 Thread JS
Solaris 10 u3, ZFS v3, kernel 118833-36. Running into a weird problem where I attach a mirror to a large, existing filesystem. The attach occurs and then the system starts swallowing all the available memory and system performance chokes while the filesystems sync. In some cases all memory
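A minimal sketch of the attach step being described: adding a second side to a device and letting it resilver. Pool and device names are hypothetical.

  # Attach a new device as a mirror of an existing one, then watch the resilver.
  # (Pool and device names are illustrative.)
  zpool attach tank c1t0d0 c1t1d0
  zpool status tank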

Re: [zfs-discuss] ZFS on 32-bit...

2007-06-30 Thread Rob Logan
> How does eeprom(1M) work on the Xeon that the OP said he has? It's faked via /boot/solaris/bootenv.rc, built into /platform/i86pc/$ISADIR/boot_archive.
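In other words, on x86 the eeprom(1M) settings live in a plain file rather than NVRAM; a small sketch of checking that, using the property name from this thread (output will vary by system).

  # eeprom(1M) on x86 reads and writes /boot/solaris/bootenv.rc.
  eeprom kernelbase
  grep kernelbase /boot/solaris/bootenv.rc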

Re: [zfs-discuss] ZFS on 32-bit...

2007-06-30 Thread David Magda
On Jun 29, 2007, at 23:34, Rob Logan wrote: eeprom kernelbase=0x8000 or for only 1G userland: eeprom kernelbase=0x5000 How does eeprom(1M) work on the Xeon that the OP said he has?

Re: [zfs-discuss] ZFS Performance with Thousands of File Systems

2007-06-30 Thread David Magda
On Jun 29, 2007, at 20:51, Stephen Le wrote: I'm investigating the feasibility of migrating from UFS to ZFS for a mail-store supporting 20K users. I need separate quotas for all of my users, which forces me to create separate ZFS file systems for each user. Does each and every user have a

[zfs-discuss] DMU corruption

2007-06-30 Thread Peter Bortas
Hello all, After playing around a bit with the disks (powering down, pulling one disk out, powering down, putting the disk back in and pulling out another one, repeat), zpool status reports permanent data corruption: # uname -a SunOS bhelliom 5.11 snv_55b i86pc i386 i86pc # zpool status -v pool:

Re: [zfs-discuss] zfs space efficiency

2007-06-30 Thread Mattias Pantzare
2007/6/25, [EMAIL PROTECTED] <[EMAIL PROTECTED]>: >I wouldn't de-duplicate without actually verifying that two blocks were >actually bitwise identical. Absolutely not, indeed. But the nice property of hashes is that if the hashes don't match then the inputs do not either. I.e., the likelihood
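A small sketch of the two-step check being argued for: use a hash only to rule candidates out, then confirm survivors byte-for-byte. File names are hypothetical; digest(1) is the Solaris utility (sha256sum is the equivalent elsewhere).

  # Hashes can only prove blocks differ; equal hashes still get a bitwise check.
  # (Block file names are made up.)
  digest -a sha256 blockA blockB
  cmp blockA blockB && echo "blocks are bitwise identical"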