Hello all,
ZFS developers have for a long time stated that ZFS is not intended,
at least not in near term, for clustered environments (that is, having
a pool safely imported by several nodes simultaneously). However,
many people on forums have wished for ZFS features in clusters.
I have some
On Tue, Oct 04, 2011 at 09:28:36PM -0700, Richard Elling wrote:
On Oct 4, 2011, at 4:14 PM, Daniel Carosone wrote:
I sent it twice, because something strange happened on the first send,
to the ashift=12 pool. zfs list -o space showed figures at least
twice those on the source, maybe
The value of zfs_arc_min specified in /etc/system must be over 64MB
(0x4000000).
Otherwise the setting is ignored. The value is in bytes not pages.
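The quoted advice above can be sketched as an /etc/system fragment. This is a hedged example, not taken from the thread: the 1 GB value is arbitrary, and the comment restates the constraint that the tunable is in bytes and must exceed 64 MB or it is ignored.

```
* Set minimum ARC size to 1 GB (value in bytes, not pages).
* Values at or below 64 MB (0x4000000) are ignored.
set zfs:zfs_arc_min = 0x40000000
```

A reboot is required for /etc/system changes to take effect.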
Jim
---
On 10/6/11 05:19 AM, Frank Van Damme wrote:
Hello,
quick and stupid question: I'm breaking my head over how to tune
zfs_arc_min on a
On Mon, Oct 03, 2011 at 07:34:07PM -0400, Edward Ned Harvey wrote:
It is also very similar to running iscsi targets on ZFS,
while letting some other servers use iscsi to connect to the ZFS server.
The SAS, IB and FCoE targets, too..
SAS might be the most directly comparable to replace a
On Wed, Oct 05, 2011 at 08:19:20AM +0400, Jim Klimov wrote:
Hello, Daniel,
Apparently your data is represented by rather small files (thus
many small data blocks)
It's a zvol, default 8k block size, so yes.
, so proportion of metadata is relatively
high, and your 4k blocks are now using at
On Sat, 8 Oct 2011, Daniel Carosone wrote:
This isn't about whether the metadata compresses, this is about
whether ZFS is smart enough to use all the space in a 4k block for
metadata, rather than assuming it can fit at best 512 bytes,
regardless of ashift. By packing, I meant packing them full
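The space blow-up being discussed can be illustrated with a back-of-the-envelope calculation. This is a hedged sketch of the padding arithmetic only, not ZFS internals: it assumes each 512-byte metadata block is rounded up to the pool's minimum allocation size of 2^ashift bytes.

```python
# Hypothetical sketch: how much space a 512-byte metadata block
# consumes on pools with different ashift values.

def allocated_size(logical_bytes: int, ashift: int) -> int:
    """Round a logical write up to the pool's minimum sector size (2**ashift)."""
    sector = 1 << ashift
    return -(-logical_bytes // sector) * sector  # ceiling division

meta = 512  # one 512-byte metadata (e.g. indirect) block
print(allocated_size(meta, 9))   # ashift=9:  512 bytes, no padding
print(allocated_size(meta, 12))  # ashift=12: 4096 bytes, 8x inflation
```

Under this assumption, a metadata-heavy zvol (default 8k volblocksize) could plausibly show the inflated `zfs list -o space` figures reported earlier in the thread.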
2011-10-08 7:25, Daniel Carosone wrote:
On Tue, Oct 04, 2011 at 09:28:36PM -0700, Richard Elling wrote:
On Oct 4, 2011, at 4:14 PM, Daniel Carosone wrote:
What is going on? Is there really that much metadata overhead? How
many metadata blocks are needed for each 8k vol block, and are they
On Mon, 10 Oct 2011, Jim Klimov wrote:
Thus I proposed the second idea with a code-only solution
to optimize performance (force user-configured minimal
data block sizes and physical alignments) where metadata
blocks would remain 512 bytes because the pool is formally
ashift=9 - and on-disk data
2011/10/8 James Litchfield jim.litchfi...@oracle.com:
The value of zfs_arc_min specified in /etc/system must be over 64MB
(0x4000000).
Otherwise the setting is ignored. The value is in bytes not pages.
well I've now set it to 0x800 and it stubbornly stays at 2048 MB...
--
Frank Van
[exposed organs below…]
On Oct 7, 2011, at 8:25 PM, Daniel Carosone wrote:
On Tue, Oct 04, 2011 at 09:28:36PM -0700, Richard Elling wrote:
On Oct 4, 2011, at 4:14 PM, Daniel Carosone wrote:
I sent it twice, because something strange happened on the first send,
to the ashift=12 pool. zfs
On 10/07/2011 11:02 AM, James Lee wrote:
Hello,
I had a pool made from a single LUN, which I'll call c4t0d0 for the
purposes of this email. We replaced it with another LUN, c4t1d0, to
grow the pool size. Now c4t1d0 is hosed and I'd like to see about
recovering whatever data we can from
On Mon, Oct 10, 2011 at 04:43:30PM -0400, James Lee wrote:
I found an old post by Jeff Bonwick with some code that does EXACTLY
what I was looking for [1]. I had to update the 'label_write' function
to support the newer ZFS interfaces:
That's great!
Would someone in the community please