Hi Richard,
How's the ranch? ;-)
This is most likely a naive question on my part. If recordsize is
set to 4k (or a multiple of 4k), will ZFS ever write a record that
is less than 4k or not a multiple of 4k?
Yes. The recordsize is the upper limit for a file record.
This includes
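To illustrate the answer above, here is a simplified model (my own sketch, not ZFS source code; it ignores metadata, compression, gang blocks, and embedded block pointers) of how the logical record sizes for a file's data are chosen:

```python
def record_sizes(file_size, recordsize=128 * 1024, sector=512):
    """Return the logical record sizes a simplified ZFS model would use.

    A file no larger than one record is stored as a single record sized
    to the file, rounded up to the sector size -- so yes, records smaller
    than recordsize (and not multiples of 4k) do occur.  Larger files use
    full recordsize records, including the last one.
    """
    if file_size <= recordsize:
        # Round the single record up to the device sector size.
        rounded = -(-file_size // sector) * sector
        return [rounded]
    nrec = -(-file_size // recordsize)  # ceiling division
    return [recordsize] * nrec

print(record_sizes(1858))        # one 2048-byte record, well under 4k
print(record_sizes(300 * 1024))  # three full 128K records
```

Compression further shrinks the physical (on-disk) size of each record down to a multiple of the sector size, so physical blocks smaller than 4k are common even when the logical record is large.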
On Dec 15, 2009, at 6:24 PM, Bill Sommerfeld wrote:
On Tue, 2009-12-15 at 17:28 -0800, Bill Sprouse wrote:
After
running for a while (couple of months) the zpool seems to get
fragmented, backups take 72 hours and a scrub takes about 180
hours.
Are there periodic snapshots being created
Hi Bob,
On Dec 15, 2009, at 6:41 PM, Bob Friesenhahn wrote:
On Tue, 15 Dec 2009, Bill Sprouse wrote:
Hi Everyone,
I hope this is the right forum for this question. A customer is
using a Thumper as an NFS file server to provide the mail store for
multiple email servers (Dovecot
seems like it would make a nice benchmark.
I used zdb -d pool to figure out which filesystems had a lot of
objects, and figured out places to trim based on that.
mike
On Tue, Dec 15, 2009 at 6:41 PM, Bob Friesenhahn <bfrie...@simple.dallas.tx.us>
wrote:
On Tue, 15 Dec 2009, Bill Sprouse wrote:
This is most likely a naive question on my part. If recordsize is set
to 4k (or a multiple of 4k), will ZFS ever write a record that is less
than 4k or not a multiple of 4k? This includes metadata. Does
compression have any effect on this?
thanks for the help,
bill
Hi Robert,
Well, the real question is how the 6140 reacts to SYNC_NV
- probably it
doesn't care...
That was our conclusion also, but it's really hard to connect the dots...
This message posted from opensolaris.org
zfs-discuss mailing list
I'm pretty sure that this bug is fixed in Solaris 10U5, patch 127127-11 and
127128-11 (note: 6462690 sd driver should set SYNC_NV bit when issuing
SYNCHRONIZE CACHE to SBC-2 devices). However, a test system with new 6140
arrays still seems to be suffering from lots of cache flushes. This is
A customer has a zpool where their spectral analysis applications create a ton
(millions?) of very small files that are typically 1858 bytes in length.
They're using ZFS because UFS consistently runs out of inodes. I'm assuming
that ZFS aggregates these little files into recordsize (128K?)
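For what it's worth, my understanding is that ZFS does not pack multiple small files into one 128K record; each small file gets its own record, sized to the file and rounded up to the sector size, plus per-file metadata. A back-of-the-envelope footprint estimate (the file count and the per-file metadata overhead here are hypothetical figures, not measurements):

```python
def zfs_footprint(nfiles, file_size, sector=512, dnode_overhead=512):
    """Estimate on-disk bytes for nfiles small files.

    Each file is modeled as one record rounded up to the sector size,
    plus an assumed per-file metadata (dnode) overhead of 512 bytes.
    """
    record = -(-file_size // sector) * sector  # ceiling to sector
    return nfiles * (record + dnode_overhead)

# 5 million 1858-byte files land around 12 GiB in this crude model.
print(zfs_footprint(5_000_000, 1858) / 2**30)
```

Unlike UFS, there is no fixed inode table to exhaust: ZFS allocates dnodes dynamically, which is why it handles millions of tiny files where UFS runs out of inodes.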
It seems that neither Legato nor NetBackup lends itself well to the
notion of lots of file systems within storage pools, from an administration
perspective. Is there a preferred methodology for doing traditional backups to
tape from ZFS where there are hundreds or thousands of
We are trying to quantify the amount of physical memory that is consumed by
Solaris versus the number of file systems which are mounted within a ZFS pool.
This is for a situation where there would be 15,000 to 20,000 file systems.
Has anyone measured this? I'm assuming U2 or U3 of Solaris 10
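I haven't measured it either, but a quick sizing sketch may help frame the question. The per-dataset figure below is purely hypothetical (you would need to measure it on your own system, e.g. by mounting datasets in batches and watching free memory):

```python
PER_DATASET_BYTES = 64 * 1024  # hypothetical per-mounted-dataset cost

def mount_memory(n_datasets, per_dataset=PER_DATASET_BYTES):
    """Estimate kernel memory consumed by n mounted ZFS datasets."""
    return n_datasets * per_dataset

for n in (15_000, 20_000):
    # At an assumed 64 KiB each, 20,000 mounts would cost ~1.2 GiB.
    print(n, mount_memory(n) / 2**20, "MiB")
```

Even if the real per-mount cost is a fraction of that, at 15,000-20,000 file systems the aggregate is worth measuring before committing to the design.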