Re: [zfs-discuss] Summary: Dedup and L2ARC memory requirements

2011-05-08 Thread Richard Elling
On May 8, 2011, at 7:56 AM, Edward Ned Harvey wrote: >> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- >> boun...@opensolaris.org] On Behalf Of Edward Ned Harvey >> >> That could certainly start to explain why my >> arc size arcstats:c never grew to any size I thought seemed reaso

Re: [zfs-discuss] Summary: Dedup and L2ARC memory requirements

2011-05-08 Thread Neil Perrin
On 05/08/11 09:22, Andrew Gabriel wrote: Toby Thain wrote: On 08/05/11 10:31 AM, Edward Ned Harvey wrote: ... Incidentally, do fsync() and sync return instantly or wait? Cuz "time sync" might produce 0 sec every time even if there were something waiting to be flushed to disk. The

Re: [zfs-discuss] Summary: Dedup and L2ARC memory requirements

2011-05-08 Thread Andrew Gabriel
Toby Thain wrote: On 08/05/11 10:31 AM, Edward Ned Harvey wrote: ... Incidentally, do fsync() and sync return instantly or wait? Cuz "time sync" might produce 0 sec every time even if there were something waiting to be flushed to disk. The semantics need to be synchronous. Anything

Re: [zfs-discuss] Summary: Dedup and L2ARC memory requirements

2011-05-08 Thread Edward Ned Harvey
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- > boun...@opensolaris.org] On Behalf Of Edward Ned Harvey > > But I'll go tune and test with this knowledge, just to be sure. BTW, here's how to tune it: echo "arc_meta_limit/Z 0x3000" | sudo mdb -kw echo "::arc" | sudo mdb -k

Re: [zfs-discuss] Summary: Dedup and L2ARC memory requirements

2011-05-08 Thread Edward Ned Harvey
> From: Garrett D'Amore [mailto:garr...@nexenta.com] > > It is tunable, I don't remember the exact tunable name... Arc_metadata_limit > or some such. There it is: echo "::arc" | sudo mdb -k | grep meta_limit arc_meta_limit = 286 MB Looking at my chart earlier in this discussion,
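The `::arc` dcmd above prints one `name = value` line per ARC statistic. As a minimal sketch (not from the thread; the field names and column widths mimic the output quoted above but can differ across Solaris/illumos builds), here is how one might pull a single field out of that output in Python:

```python
# Hypothetical helper: extract a named field from `mdb -k` "::arc" output.
def arc_field(mdb_output: str, field: str) -> str:
    """Return the value portion of a `field = value` line, if present."""
    for line in mdb_output.splitlines():
        if "=" in line:
            name, _, value = line.partition("=")
            if name.strip() == field:
                return value.strip()
    raise KeyError(field)

# Sample text modeled on the output quoted in this thread (assumed layout).
sample = """\
arc_meta_used             =       190 MB
arc_meta_limit            =       286 MB
arc_meta_max              =       201 MB
"""

print(arc_field(sample, "arc_meta_limit"))  # → 286 MB
```

In practice the `sample` string would come from `mdb -k` via a pipe; the 190 MB / 201 MB figures here are made up for illustration.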

Re: [zfs-discuss] Summary: Dedup and L2ARC memory requirements

2011-05-08 Thread Garrett D'Amore
It is tunable, I don't remember the exact tunable name... Arc_metadata_limit or some such. -- Garrett D'Amore On May 8, 2011, at 7:37 AM, "Edward Ned Harvey" wrote: >> From: Garrett D'Amore [mailto:garr...@nexenta.com] >> >> Just another data point. The ddt is considered metadata, and by

Re: [zfs-discuss] Summary: Dedup and L2ARC memory requirements

2011-05-08 Thread Edward Ned Harvey
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- > boun...@opensolaris.org] On Behalf Of Edward Ned Harvey > > That could certainly start to explain why my > arc size arcstats:c never grew to any size I thought seemed reasonable... Also now that I'm looking closer at arcstats, it

Re: [zfs-discuss] Summary: Dedup and L2ARC memory requirements

2011-05-08 Thread Toby Thain
On 08/05/11 10:31 AM, Edward Ned Harvey wrote: >... > Incidentally, do fsync() and sync return instantly or wait? Cuz "time > sync" might produce 0 sec every time even if there were something waiting to > be flushed to disk. The semantics need to be synchronous. Anything else would be a horribl
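The distinction being debated above can be sketched as follows: POSIX allows `sync` to schedule the flush and return before it completes, while `fsync()` must block until the file's data reaches stable storage. So timing an `fsync()` of freshly written data is a more honest measurement than `time sync` (a sketch, not from the thread; the path and sizes are arbitrary):

```python
# Sketch: fsync() must not return until the data is on stable storage,
# so timing it is more telling than timing `sync`, which may return
# before the flush completes.
import os, time

path = "/tmp/fsync_demo.dat"   # arbitrary scratch file
with open(path, "wb") as f:
    f.write(os.urandom(4 * 1024 * 1024))   # 4 MiB of dirty data
    f.flush()                              # push Python's buffer to the OS
    t0 = time.monotonic()
    os.fsync(f.fileno())                   # blocks until the device has it
    elapsed = time.monotonic() - t0

print(f"fsync took {elapsed * 1000:.1f} ms")  # nonzero on real disks
os.remove(path)
```

Note that a ZFS slog or battery-backed cache can legitimately make this fast: the data is on stable storage, just not yet on the main pool disks.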

Re: [zfs-discuss] Summary: Dedup and L2ARC memory requirements

2011-05-08 Thread Toby Thain
On 06/05/11 9:17 PM, Erik Trimble wrote: > On 5/6/2011 5:46 PM, Richard Elling wrote: >> ... >> Yes, perhaps a bit longer for recursive destruction, but everyone here >> knows recursion is evil, right? :-) >> -- richard > You, my friend, have obviously never worshipped at the Temple of the > Lam

Re: [zfs-discuss] Summary: Dedup and L2ARC memory requirements

2011-05-08 Thread Edward Ned Harvey
> From: Garrett D'Amore [mailto:garr...@nexenta.com] > > Just another data point. The ddt is considered metadata, and by default the > arc will not allow more than 1/4 of it to be used for metadata. Are you > still > sure it fits? That's interesting. Is it tunable? That could certainly star
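Garrett's point can be put into a back-of-envelope check: the DDT is metadata, and metadata is capped by default at 1/4 of the ARC, so the DDT must fit under that quarter, not under the whole ARC. The sketch below uses an assumed ~320 bytes per in-core DDT entry (a commonly quoted ballpark, not a figure from this thread; the real size varies by build):

```python
# Back-of-envelope check (assumed numbers, not from the thread): does the
# dedup table (DDT) fit under the ARC metadata limit, which by default
# is capped at 1/4 of the ARC?
DDT_ENTRY_BYTES = 320   # rough in-core size per unique block; varies by build

def ddt_fits(unique_blocks, arc_bytes, meta_fraction=0.25):
    ddt_bytes = unique_blocks * DDT_ENTRY_BYTES
    meta_limit = arc_bytes * meta_fraction
    return ddt_bytes <= meta_limit, ddt_bytes, meta_limit

# Example: 1 TiB of unique data in 128 KiB blocks, 4 GiB ARC.
unique = (1 << 40) // (128 * 1024)     # ~8.4 million unique blocks
fits, ddt, limit = ddt_fits(unique, 4 << 30)
print(f"DDT ≈ {ddt / 2**20:.0f} MiB, meta limit ≈ {limit / 2**20:.0f} MiB, fits: {fits}")
```

With these assumed numbers the DDT (~2.5 GiB) overflows the 1 GiB metadata limit even though it would fit in the 4 GiB ARC as a whole, which is exactly the trap discussed above.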

Re: [zfs-discuss] Summary: Dedup and L2ARC memory requirements

2011-05-08 Thread Edward Ned Harvey
> From: Erik Trimble [mailto:erik.trim...@oracle.com] > > (1) I'm assuming you run your script repeatedly in the same pool, > without deleting the pool. If that is the case, that means that a run of > X+1 should dedup completely with the run of X. E.g. a run with 12 > blocks will dedup the fi
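Erik's reasoning about repeated runs can be sketched with hypothetical block IDs (not his actual script): if run N writes blocks 1..N into the same pool, then every run after the first contributes exactly one block the pool has not seen, and the rest dedup against earlier runs.

```python
# Sketch of the reasoning above (hypothetical block IDs): if each run N
# writes blocks 1..N into the same pool, run N+1 dedups its first N
# blocks against what run N already wrote.
def run_writes(n):
    return set(range(1, n + 1))   # blocks written by a run of size n

pool = set()                      # unique (post-dedup) blocks in the pool
for n in range(1, 13):            # runs of size 1..12
    new = run_writes(n) - pool    # blocks that do NOT dedup
    pool |= run_writes(n)
    assert len(new) == 1          # each run adds exactly one new block

print(len(pool))  # → 12 unique blocks after runs up to size 12
```

So the DDT grows by one entry per run rather than by the run's full size, which is the property the test depends on.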