On Jun 30, 2007, at 17:08, Richard Elling wrote:
> Excellent question. The problem with using file system quotas for
> a service such as a mail store is that you have very little control
> over the implementation of policies. The only thing the mail service
> knows is that a write to a mailbox fails.
> FWIW, I think you will find that managing quotas for services is better
> when implemented by the service, rather than the file system.
Thanks for the suggestion, Richard, but we're very happy with our current mail
software, and we'd rather use file system quotas to control inbox sizes (our
mail adm
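For reference, a minimal sketch of the per-user approach under discussion, assuming a hypothetical pool named tank and a user named alice (all names illustrative):

  # one file system per user, each with its own quota
  zfs create tank/mail
  zfs create tank/mail/alice
  zfs set quota=500m tank/mail/alice
  # verify
  zfs get quota tank/mail/alice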
(accidentally replied off-list via email, posting message here)
We've already considered pooling user quotas together, if that's what you're
going to suggest. Pooling user quotas would present a few problems for us, as
we have a fair number of users with unique quotas and the general user quota
On 6/30/07, roland <[EMAIL PROTECTED]> wrote:
some other funny benchmark numbers:
i wondered how the performance/compressratio of lzjb, lzo and gzip would compare
if we have an optimally compressible datastream.
since zfs handles repeating zeros quite efficiently (i.e. allocating no space)
i tried writing non-zero values.
the result is quite interesting
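A rough sketch of one way to run such a comparison, assuming a pool named tank; lzo was not part of stock ZFS (it circulated as an experimental patch), so only lzjb and gzip are shown, and gzip compression needs a recent Nevada build:

  zfs create tank/clzjb
  zfs set compression=lzjb tank/clzjb
  zfs create tank/cgzip
  zfs set compression=gzip tank/cgzip

  # a highly compressible but non-zero stream:
  # yes(1) emits the repeating two-byte pattern "y\n"
  yes | dd of=/tank/clzjb/data bs=1024k count=256
  yes | dd of=/tank/cgzip/data bs=1024k count=256

  zfs get compressratio tank/clzjb tank/cgzip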
On 6/30/07, Peter Bortas <[EMAIL PROTECTED]> wrote:
I'm currently doing a complete scrub, but according to the latest
zpool status estimate it will be 63h before I know how that went...
The scrub has now completed with 0 errors and there are no longer
any corruption errors reported.
--
Peter
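For anyone following along, kicking off and monitoring a scrub the same way, assuming a pool named tank:

  zpool scrub tank
  # progress, estimated completion time, and any errors found
  zpool status -v tank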
On 6/30/07, Matthew Ahrens <[EMAIL PROTECTED]> wrote:
Peter Bortas wrote:
> According to the zdb dump, object 0 seems to be the DMU node on each
> file system. My understanding of this part of ZFS is very shallow, but
> why does it allow the filesystems to be mounted rw with damaged DMU
> nodes, doesn't that result in a risk of more permanent damage to the
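For anyone wanting to look at the same structures, a sketch of the kind of zdb invocation involved; zdb is a private debugging tool, so options vary by build, and the pool name tank is illustrative:

  # dump per-object information for the pool's datasets;
  # each extra -d increases the level of detail
  zdb -dddd tank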
JS wrote:
Solaris 10 u3, zfs 3, kernel 118833-36.
Running into a weird problem where I attach a mirror to a large, existing
filesystem. The attach occurs and then the system starts swallowing all the
available memory and system performance chokes, while the filesystems sync.
In some cases all memory
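For context, the operation in question, with illustrative pool and device names:

  # attach c1t0d0 as a mirror of the existing disk c0t0d0
  zpool attach tank c0t0d0 c1t0d0
  # resilvering runs in the background; watch it here
  zpool status tank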
> How does eeprom(1M) work on the Xeon that the OP said he has?
it's faked via /boot/solaris/bootenv.rc,
built into /platform/i86pc/$ISADIR/boot_archive
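So on x86 the settings can be inspected (and edited) as plain text; a quick sketch, assuming the property has been set at least once:

  # eeprom(1M) on x86 reads and writes this file...
  grep kernelbase /boot/solaris/bootenv.rc
  # ...and the file is bundled into the boot archive,
  # so rebuild the archive after editing by hand
  bootadm update-archive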
On Jun 29, 2007, at 23:34, Rob Logan wrote:
> eeprom kernelbase=0x8000
> or for only 1G userland:
> eeprom kernelbase=0x5000
How does eeprom(1M) work on the Xeon that the OP said he has?
On Jun 29, 2007, at 20:51, Stephen Le wrote:
I'm investigating the feasibility of migrating from UFS to ZFS for
a mail-store supporting 20K users. I need separate quotas for all
of my users, which forces me to create separate ZFS file systems
for each user.
Does each and every user have a
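For scale, creating and sizing that many file systems is easy to script; a minimal sketch, assuming a pool named tank and a users.txt with one login per line (both illustrative, as is the quota value):

  while read u; do
      zfs create "tank/mail/$u"
      zfs set quota=250m "tank/mail/$u"
  done < users.txt

The real cost at 20K file systems tends to be mount time and zfs list overhead rather than the creation itself.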
Hello all,
After playing around a bit with the disks (powering down, pulling one
disk out, powering down, putting the disk back in and pulling out
another one, repeat) zpool status reports permanent data corruption:
# uname -a
SunOS bhelliom 5.11 snv_55b i86pc i386 i86pc
# zpool status -v
pool:
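The usual sequence for confirming whether such corruption is really permanent (pool name illustrative):

  # list the files affected by permanent errors
  zpool status -v tank
  # re-read everything now that all disks are back
  zpool scrub tank
  # if the scrub comes back clean, reset the error counters
  zpool clear tank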
2007/6/25, [EMAIL PROTECTED] <[EMAIL PROTECTED]>:
>I wouldn't de-duplicate without actually verifying that two blocks were
>actually bitwise identical.
Absolutely not, indeed.
But the nice property of hashes is that if the hashes don't match then
the inputs do not either.
I.e., the likelihood
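The same logic at the command line, with two illustrative files a and b: differing digests prove the contents differ, while matching digests only nominate the pair for a byte-wise check:

  # fast negative test: different hashes => different contents
  digest -a sha256 a b
  # hashes matched? verify before treating the blocks as identical
  cmp a b && echo "bitwise identical"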