2011/6/1 Edward Ned Harvey opensolarisisdeadlongliveopensola...@nedharvey.com:
(2) The above is pretty much the best you can do, if your server is going
to be a normal server, handling both reads and writes. Because the data and
the metadata are both stored in the ARC, the data has a tendency
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Edward Ned Harvey
So here's what I'm going to do. With arc_meta_limit at 7680M, of which
100M was consumed naturally, that leaves me 7580M to play with. Call it
7500M. Divide by 412 bytes,
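(A back-of-envelope version of that arithmetic, assuming the roughly 412
bytes of ARC per DDT entry quoted above; the figures are illustrative, not
measured:)

# Rough estimate of how many DDT entries fit under arc_meta_limit,
# assuming ~412 bytes of ARC per entry as discussed in this thread.
MiB = 1024 * 1024
arc_meta_limit = 7680 * MiB                      # ceiling on ARC metadata
already_used   = 100 * MiB                       # metadata consumed before the test
budget         = arc_meta_limit - already_used   # ~7580M; call it 7500M
bytes_per_ddt_entry = 412                        # assumed per-entry ARC footprint

max_entries = (7500 * MiB) // bytes_per_ddt_entry
print("DDT entries that fit in ARC: ~%dM" % (max_entries // 10**6))
# -> roughly 19 million entries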
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Edward Ned Harvey
(1) I'll push the recordsize back up to 128k, and then repeat this test
with something slightly smaller than 128k. Say, 120k.
Good news. :-) Changing the recordsize made a
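(For anyone following along, a sketch of why the recordsize matters to this
test, assuming files are written in full records and each unique block costs
one DDT entry of roughly 412 bytes in ARC; both numbers are assumptions from
earlier in the thread, not measurements:)

# Unique blocks (and therefore DDT entries) produced by a given amount of
# unique data at different recordsize settings.
MiB = 1024 * 1024
GiB = 1024 * MiB
data_written = 100 * GiB
bytes_per_ddt_entry = 412            # assumed ARC cost per entry

for recordsize in (8 * 1024, 64 * 1024, 120 * 1024, 128 * 1024):
    blocks = data_written // recordsize
    arc_cost = blocks * bytes_per_ddt_entry
    print("recordsize %6d -> %9d blocks, ~%d MiB of DDT in ARC"
          % (recordsize, blocks, arc_cost // MiB))

Smaller records mean many more DDT entries for the same data, which is why a
tiny recordsize fills arc_meta_limit so much faster.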
On 26-05-11 13:38, Edward Ned Harvey wrote:
Perhaps a property could be
set, which would store the DDT exclusively on that device.
Oh yes please, let me put my DDT on an SSD.
But what if you lose it (the vdev)? Would there be a way to reconstruct
the DDT (which you need to be able to delete
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Frank Van Damme
On 26-05-11 13:38, Edward Ned Harvey wrote:
Perhaps a property could be
set, which would store the DDT exclusively on that device.
Oh yes please, let me put my DDT on
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Frank Van Damme
On 26-05-11 13:38, Edward Ned Harvey wrote:
But what if you lose it (the vdev)? Would there be a way to
reconstruct the DDT (which you need to be able to delete old,
On May 27, 2011, at 6:20 AM, Jim Klimov wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Frank Van Damme
On 26-05-11 13:38, Edward Ned Harvey wrote:
But what if you lose it (the vdev)? Would there be a way to
From: Daniel Carosone [mailto:d...@geek.com.au]
Sent: Wednesday, May 25, 2011 10:10 PM
These are additional
iops that dedup creates, not ones that it substitutes for others in
roughly equal number.
Hey ZFS developers - Of course there are many possible ways to address these
issues.
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Edward Ned Harvey
Both the necessity to read and write the primary storage pool... That's
very hurtful.
Actually, I'm seeing two different modes of degradation:
(1) Previously described.
On Thu, May 26, 2011 at 07:38:05AM -0400, Edward Ned Harvey wrote:
From: Daniel Carosone [mailto:d...@geek.com.au]
Sent: Wednesday, May 25, 2011 10:10 PM
These are additional
iops that dedup creates, not ones that it substitutes for others in
roughly equal number.
Hey ZFS
On Thu, May 26, 2011 at 10:25:04AM -0400, Edward Ned Harvey wrote:
(2) Now, in a pool with 2.4M unique blocks and dedup enabled (no verify), a
test file requires 10m38s to write and 2m54s to delete, but with dedup
disabled it only requires 0m40s to write and 0m13s to delete exactly the
same
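(Putting those timings side by side, the slowdown factors implied by the
numbers above:)

# Slowdown factors implied by the timings quoted above.
def seconds(m, s):
    return m * 60 + s

write_dedup,  write_nodedup  = seconds(10, 38), seconds(0, 40)
delete_dedup, delete_nodedup = seconds(2, 54),  seconds(0, 13)

print("write  slowdown: %.1fx" % (write_dedup  / float(write_nodedup)))
print("delete slowdown: %.1fx" % (delete_dedup / float(delete_nodedup)))
# -> roughly 16x slower to write and 13x slower to delete with dedup on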
I've finally returned to this dedup testing project, trying to get a handle
on why performance is so terrible. At the moment I'm re-running tests and
monitoring memory_throttle_count, to see if maybe that's what's causing the
limit. But while that's in progress and I'm still thinking...
I
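(In case it's useful to anyone, a minimal way to watch that counter while a
test runs; this assumes a Solaris-style kstat(1M) CLI and that the counter
lives at zfs:0:arcstats:memory_throttle_count, which is worth double-checking
on your build:)

# Poll the ARC memory_throttle_count kstat every 10 seconds.
import subprocess, time

KSTAT = "zfs:0:arcstats:memory_throttle_count"   # assumed kstat name

while True:
    # "kstat -p" prints "module:instance:name:statistic<tab>value"
    out = subprocess.check_output(["kstat", "-p", KSTAT]).decode()
    value = out.split()[-1]
    print("%s  memory_throttle_count = %s" % (time.strftime("%H:%M:%S"), value))
    time.sleep(10)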
On Wed, May 25, 2011 at 2:23 PM, Edward Ned Harvey
opensolarisisdeadlongliveopensola...@nedharvey.com wrote:
I've finally returned to this dedup testing project, trying to get a handle
on why performance is so terrible. At the moment I'm re-running tests and
monitoring memory_throttle_count,
From: Matthew Ahrens [mailto:mahr...@delphix.com]
Sent: Wednesday, May 25, 2011 6:50 PM
The DDT is a ZAP object, so it is an on-disk hashtable, free of O(log(n))
rebalancing operations. It is written asynchronously, from syncing
context. That said, for each block written (unique or not),
On Wed, May 25, 2011 at 03:50:09PM -0700, Matthew Ahrens wrote:
That said, for each block written (unique or not), the DDT must be updated,
which means reading and then writing the block that contains that dedup
table entry, and the indirect blocks to get to it. With a reasonably large
DDT,
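(A crude model of that amplification, assuming each DDT update touches one
ZAP leaf block plus about two indirect blocks, and that with a large, mostly
disk-resident DDT those blocks are not cached; every count here is an
assumption for illustration, not a measured figure:)

# Crude upper bound on the extra I/O dedup adds per block written, based on
# the mechanism described above: read and then rewrite the ZAP block holding
# the DDT entry, plus the indirect blocks leading to it.
blocks_written = 2400000              # e.g. the 2.4M unique blocks above
extra_reads_per_block  = 1 + 2        # assumed: 1 ZAP leaf + ~2 indirects
extra_writes_per_block = 1 + 2        # same blocks dirtied and rewritten

print("extra reads : %d" % (blocks_written * extra_reads_per_block))
print("extra writes: %d" % (blocks_written * extra_writes_per_block))
# The write side overstates things, since entries sharing a ZAP block are
# coalesced per txg in syncing context, but the reads are largely random
# IOPS; these are the 'additional iops that dedup creates' mentioned earlier.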