Re: [zfs-discuss] Dedup and L2ARC memory requirements (again)

2011-05-04 Thread Edward Ned Harvey
> From: Richard Elling [mailto:richard.ell...@gmail.com] > Sent: Friday, April 29, 2011 12:49 AM > > The lower bound of ARC size is c_min > > # kstat -p zfs::arcstats:c_min I see there is another character in the plot: c_max c_max seems to be 80% of system ram (at least on my systems). I assum
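
For reference, the arcstats counters being discussed can be read directly with kstat; a minimal sketch, assuming a Solaris-derived system with the zfs kstat module loaded (values are in bytes):

  # kstat -p zfs::arcstats:c_min    (lower bound the ARC can shrink to)
  # kstat -p zfs::arcstats:c_max    (upper bound, typically a large fraction of RAM)
  # kstat -p zfs::arcstats:size     (current ARC size, for comparison)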

Re: [zfs-discuss] Dedup and L2ARC memory requirements (again)

2011-04-30 Thread Roy Sigurd Karlsbakk
> And one of these: > Assertion failed: space_map_load(&msp->ms_map, &zdb_space_map_ops, > 0x0, > &msp->ms_smo, spa->spa_meta_objset) == 0, file ../zdb.c, line 1439, > function > zdb_leak_init > Abort (core dumped) > > I saved the core and ran again. This time it spewed "leaked space" > messages >

Re: [zfs-discuss] Dedup and L2ARC memory requirements (again)

2011-04-30 Thread Neil Perrin
On 04/30/11 01:41, Sean Sprague wrote: : xvm-4200m2-02 ; I can do the echo | mdb -k. But what is that : xvm-4200 command? My guess is that is a very odd shell prompt ;-) - Indeed ':' means what follows is a comment (at least to /bin/ksh) 'xvm-4200m2-02' is the comment - actua

Re: [zfs-discuss] Dedup and L2ARC memory requirements (again)

2011-04-30 Thread Sean Sprague
: xvm-4200m2-02 ; I can do the echo | mdb -k. But what is that : xvm-4200 command? My guess is that is a very odd shell prompt ;-)

Re: [zfs-discuss] Dedup and L2ARC memory requirements (again)

2011-04-29 Thread Brandon High
On Thu, Apr 28, 2011 at 6:48 PM, Edward Ned Harvey wrote: > What does it mean / what should you do, if you run that command, and it > starts spewing messages like this? > leaked space: vdev 0, offset 0x3bd8096e00, size 7168 I'm not sure there's much you can do about it short of deleting datasets

Re: [zfs-discuss] Dedup and L2ARC memory requirements (again)

2011-04-29 Thread Edward Ned Harvey
> From: Neil Perrin [mailto:neil.per...@oracle.com] > > The size of these structures will vary according to the release you're running. > You can always find out the size for a particular system using ::sizeof within > mdb. For example, as super user : > > : xvm-4200m2-02 ; echo ::sizeof ddt_entry
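
Spelled out, the mdb invocations the thread is using; a sketch assuming the type names referenced here (ddt_entry_t for the in-core DDT entry, arc_buf_hdr_t for an ARC header), run as root, with sizes that vary by release:

  # echo ::sizeof ddt_entry_t | mdb -k
  # echo ::sizeof arc_buf_hdr_t | mdb -k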

Re: [zfs-discuss] Dedup and L2ARC memory requirements (again)

2011-04-29 Thread Edward Ned Harvey
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- > boun...@opensolaris.org] On Behalf Of Edward Ned Harvey > > What does it mean / what should you do, if you run that command, and it > starts spewing messages like this? > leaked space: vdev 0, offset 0x3bd8096e00, size 7168 And on

Re: [zfs-discuss] Dedup and L2ARC memory requirements (again)

2011-04-29 Thread Edward Ned Harvey
> From: Edward Ned Harvey > I saved the core and ran again. This time it spewed "leaked space" messages > for an hour, and completed. But the final result was physically impossible (it > counted up 744k total blocks, which means something like 3Megs per block in > my 2.39T used pool. I checked c
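
For the arithmetic behind "physically impossible": 2.39 TB spread across ~744,000 blocks works out to roughly 2.39e12 / 744,000 ≈ 3.2 MB per block, while the largest block ZFS will write is the 128 KB recordsize ceiling on these builds, so the reported count is low by a factor of 25 or more.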

Re: [zfs-discuss] Dedup and L2ARC memory requirements (again)

2011-04-29 Thread Erik Trimble
On 4/29/2011 9:44 AM, Brandon High wrote: On Fri, Apr 29, 2011 at 7:10 AM, Roy Sigurd Karlsbakk wrote: This was fletcher4 earlier, and still is in opensolaris/openindiana. Given a combination with verify (which I would use anyway, since there are always tiny chances of collisions), why would

Re: [zfs-discuss] Dedup and L2ARC memory requirements (again)

2011-04-29 Thread Brandon High
On Fri, Apr 29, 2011 at 7:10 AM, Roy Sigurd Karlsbakk wrote: > This was fletcher4 earlier, and still is in opensolaris/openindiana. Given a > combination with verify (which I would use anyway, since there are always > tiny chances of collisions), why would sha256 be a better choice? fletcher4

Re: [zfs-discuss] Dedup and L2ARC memory requirements (again)

2011-04-29 Thread Roy Sigurd Karlsbakk
> Controls whether deduplication is in effect for a > dataset. The default value is off. The default checksum > used for deduplication is sha256 (subject to change). > > This is from b159. This was fletcher4 earlier, and still is in opensolaris/openindiana. Given a combination with verify (whic
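
For anyone wanting the verify behaviour discussed here, it is requested through the dedup property itself; a sketch with a hypothetical dataset name, on builds that accept these values:

  # zfs set dedup=sha256,verify tank/data    (byte-for-byte comparison on checksum match)
  # zfs set dedup=verify tank/data           (shorthand that implies the default sha256)
  # zfs get dedup,checksum tank/data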

Re: [zfs-discuss] Dedup and L2ARC memory requirements (again)

2011-04-29 Thread Edward Ned Harvey
> From: Richard Elling [mailto:richard.ell...@gmail.com] > > > Worse yet, your arc consumption could be so large, that > > PROCESSES don't fit in ram anymore. In this case, your processes get > pushed > > out to swap space, which is really bad. > > This will not happen. The ARC will be asked to
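
For completeness, the usual way ARC growth is bounded on Solaris-derived systems of this era is the zfs_arc_max tunable in /etc/system; a sketch, with the 4 GB figure purely illustrative:

  * cap the ARC at 4 GB (0x100000000 bytes); takes effect after a reboot
  set zfs:zfs_arc_max = 0x100000000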

Re: [zfs-discuss] Dedup and L2ARC memory requirements (again)

2011-04-28 Thread Richard Elling
[the dog jumped on the keyboard and wiped out my first reply, second attempt...] On Apr 27, 2011, at 9:26 PM, Edward Ned Harvey wrote: >> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- >> boun...@opensolaris.org] On Behalf Of Neil Perrin >> >> No, that's not true. The DDT is just

Re: [zfs-discuss] Dedup and L2ARC memory requirements (again)

2011-04-28 Thread Edward Ned Harvey
> From: Tomas Ögren [mailto:st...@acc.umu.se] > > zdb -bb pool Oy - this is scary - Thank you by the way for that command - I've been gathering statistics across a handful of systems now ... What does it mean / what should you do, if you run that command, and it starts spewing messages like this
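
The commands in question, with a hypothetical pool name; note that zdb walks the whole pool and can take hours on large ones:

  # zdb -bb tank    (block statistics; the traversal summary includes the total block count)
  # zdb -DD tank    (DDT histogram, if dedup is or has been enabled)

Multiplying the total block count by the per-entry figures discussed elsewhere in the thread (roughly 300-400 bytes each) gives a back-of-the-envelope DDT size.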

Re: [zfs-discuss] Dedup and L2ARC memory requirements (again)

2011-04-28 Thread Brandon High
On Thu, Apr 28, 2011 at 3:50 PM, Edward Ned Harvey wrote: > When a block is scheduled to be written, system performs checksum, and looks > for a matching entry in DDT in ARC/L2ARC.  In the event of an ARC/L2ARC ... which, if it's on L2ARC, is another read too. While most people will be using a fa

Re: [zfs-discuss] Dedup and L2ARC memory requirements (again)

2011-04-28 Thread Edward Ned Harvey
> From: Brandon High [mailto:bh...@freaks.com] > Sent: Thursday, April 28, 2011 5:33 PM > > On Wed, Apr 27, 2011 at 9:26 PM, Edward Ned Harvey > wrote: > > Correct me if I'm wrong, but the dedup sha256 checksum happens in > addition > > to (not instead of) the fletcher2 integrity checksum.  So af

Re: [zfs-discuss] Dedup and L2ARC memory requirements (again)

2011-04-28 Thread Brandon High
On Thu, Apr 28, 2011 at 3:05 PM, Erik Trimble wrote: > A careful reading of the man page seems to imply that there's no way to > change the dedup checksum algorithm from sha256, as the dedup property > ignores the checksum property, and there's no provided way to explicitly > set a checksum algori
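
One way to see what the properties report, with a hypothetical dataset name (the dedup setting carries its own checksum choice, independent of the checksum property shown alongside it):

  # zfs get -o property,value,source checksum,dedup tank/fs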

Re: [zfs-discuss] Dedup and L2ARC memory requirements (again)

2011-04-28 Thread Erik Trimble
On Thu, 2011-04-28 at 14:33 -0700, Brandon High wrote: > On Wed, Apr 27, 2011 at 9:26 PM, Edward Ned Harvey > wrote: > > Correct me if I'm wrong, but the dedup sha256 checksum happens in addition > > to (not instead of) the fletcher2 integrity checksum. So after bootup, > > My understanding is t

Re: [zfs-discuss] Dedup and L2ARC memory requirements (again)

2011-04-28 Thread Brandon High
On Wed, Apr 27, 2011 at 9:26 PM, Edward Ned Harvey wrote: > Correct me if I'm wrong, but the dedup sha256 checksum happens in addition > to (not instead of) the fletcher2 integrity checksum.  So after bootup, My understanding is that enabling dedup forces sha256. "The default checksum used for d

Re: [zfs-discuss] Dedup and L2ARC memory requirements (again)

2011-04-28 Thread Erik Trimble
On Thu, 2011-04-28 at 13:59 -0600, Neil Perrin wrote: > On 4/28/11 12:45 PM, Edward Ned Harvey wrote: > > > > In any event, thank you both for your input. Can anyone answer these > > authoritatively? (Neil?) I'll send you a pizza. ;-) > > > > - I wouldn't consider myself an authority on the d

Re: [zfs-discuss] Dedup and L2ARC memory requirements (again)

2011-04-28 Thread Neil Perrin
On 4/28/11 12:45 PM, Edward Ned Harvey wrote: From: Erik Trimble [mailto:erik.trim...@oracle.com] OK, I just re-looked at a couple of things, and here's what I /think/ is the correct numbers. I just checked, and the current size of this structure is 0x178, or 376 bytes. Each ARC entry, which p

Re: [zfs-discuss] Dedup and L2ARC memory requirements (again)

2011-04-28 Thread Edward Ned Harvey
> From: Erik Trimble [mailto:erik.trim...@oracle.com] > > OK, I just re-looked at a couple of things, and here's what I /think/ is > the correct numbers. > > I just checked, and the current size of this structure is 0x178, or 376 > bytes. > > Each ARC entry, which points to either an L2ARC item

Re: [zfs-discuss] Dedup and L2ARC memory requirements (again)

2011-04-27 Thread Erik Trimble
OK, I just re-looked at a couple of things, and here's what I /think/ is the correct numbers. A single entry in the DDT is defined in the struct "ddt_entry" : http://src.opensolaris.org/source/xref/onnv/onnv-gate/usr/src/uts/common/fs/zfs/sys/ddt.h#108 I just checked, and the current size of thi
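
Taking the 376-byte (0x178) figure at face value, a back-of-the-envelope example with a purely illustrative block count: 10 million unique 128 KB blocks (about 1.2 TiB of unique data) works out to 10,000,000 x 376 bytes ≈ 3.8 GB of DDT if held entirely in core; smaller average block sizes inflate the entry count, and therefore the table, proportionally.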

Re: [zfs-discuss] Dedup and L2ARC memory requirements (again)

2011-04-27 Thread Richard Elling
On Apr 27, 2011, at 9:26 PM, Edward Ned Harvey wrote: >> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- >> boun...@opensolaris.org] On Behalf Of Neil Perrin >> >> No, that's not true. The DDT is just like any other ZFS metadata and can be >> split over the ARC, >> cache device

Re: [zfs-discuss] Dedup and L2ARC memory requirements (again)

2011-04-27 Thread Edward Ned Harvey
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- > boun...@opensolaris.org] On Behalf Of Neil Perrin > > No, that's not true. The DDT is just like any other ZFS metadata and can be > split over the ARC, > cache device (L2ARC) and the main pool devices. An infrequently referenced >

Re: [zfs-discuss] Dedup and L2ARC memory requirements (again)

2011-04-27 Thread Tomas Ögren
On 27 April, 2011 - Edward Ned Harvey sent me these 0,6K bytes: > > From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- > > boun...@opensolaris.org] On Behalf Of Erik Trimble > > > > (BTW, is there any way to get a measurement of number of blocks consumed > > per zpool?  Per vdev?  Per

Re: [zfs-discuss] Dedup and L2ARC memory requirements (again)

2011-04-27 Thread Edward Ned Harvey
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- > boun...@opensolaris.org] On Behalf Of Erik Trimble > > (BTW, is there any way to get a measurement of number of blocks consumed > per zpool?  Per vdev?  Per zfs filesystem?)  *snip*. > > > you need to use zdb to see what the curr

Re: [zfs-discuss] Dedup and L2ARC memory requirements (again)

2011-04-26 Thread Roy Sigurd Karlsbakk
- Original Message - > On 04/25/11 11:55, Erik Trimble wrote: > > On 4/25/2011 8:20 AM, Edward Ned Harvey wrote: > > > And one more comment: Based on what's below, it seems that the DDT > > > gets stored on the cache device and also in RAM. Is that correct? > > > What > > > if you didn't ha

Re: [zfs-discuss] Dedup and L2ARC memory requirements (again)

2011-04-25 Thread Brandon High
On Mon, Apr 25, 2011 at 8:20 AM, Edward Ned Harvey wrote: > and 128k assuming default recordsize.  (BTW, recordsize seems to be a zfs > property, not a zpool property.  So how can you know or configure the > blocksize for something like a zvol iscsi target?) zvols use the 'volblocksize' property,
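
For the zvol case, the block size is fixed per volume at creation time; a sketch with hypothetical names and sizes:

  # zfs create -V 100G -o volblocksize=8k tank/iscsi01    (cannot be changed after creation)
  # zfs get volblocksize tank/iscsi01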

Re: [zfs-discuss] Dedup and L2ARC memory requirements (again)

2011-04-25 Thread Freddie Cash
On Mon, Apr 25, 2011 at 10:55 AM, Erik Trimble wrote: > Min block size is 512 bytes. Technically, isn't the minimum block size 2^(ashift value)? Thus, on 4 KB disks where the vdevs have an ashift=12, the minimum block size will be 4 KB. -- Freddie Cash fjwc...@gmail.com
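
In other words the floor is 2^ashift per top-level vdev: ashift=9 gives 512-byte minimum allocations, ashift=12 gives 4 KB. One way to check what a pool was built with (output details vary by build):

  # zdb -C tank | grep ashift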

Re: [zfs-discuss] Dedup and L2ARC memory requirements (again)

2011-04-25 Thread Neil Perrin
On 04/25/11 11:55, Erik Trimble wrote: On 4/25/2011 8:20 AM, Edward Ned Harvey wrote: And one more comment: Based on what's below, it seems that the DDT gets stored on the cache device and also in RAM. Is that correct? What if you didn't have a cache device? Shouldn't it *always* be in r

Re: [zfs-discuss] Dedup and L2ARC memory requirements (again)

2011-04-25 Thread Erik Trimble
On 4/25/2011 8:20 AM, Edward Ned Harvey wrote: There are a lot of conflicting references on the Internet, so I'd really like to solicit actual experts (ZFS developers or people who have physical evidence) to weigh in on this... After searching around, the reference I found to be the most see

Re: [zfs-discuss] Dedup and L2ARC memory requirements (again)

2011-04-25 Thread Roy Sigurd Karlsbakk
> After modifications that I hope are corrections, I think the post > should look like this: > > The rule-of-thumb is 270 bytes/DDT entry, and 200 bytes of ARC for > every L2ARC entry. > > DDT doesn't count for this ARC space usage > > E.g.: I have 1TB of 4k blocks that are to be deduped, and it
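
Working the quoted rule of thumb through for that example: 1 TiB of unique data at 4 KiB per block is 2^40 / 2^12 = 268,435,456 blocks; at 270 bytes per DDT entry that is roughly 72 GB (about 67 GiB) of table, which is why small-block pools make dedup so memory-hungry. The 200 bytes of ARC per L2ARC entry is charged on top of that for whatever ends up on the cache device.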

[zfs-discuss] Dedup and L2ARC memory requirements (again)

2011-04-25 Thread Edward Ned Harvey
There are a lot of conflicting references on the Internet, so I'd really like to solicit actual experts (ZFS developers or people who have physical evidence) to weigh in on this... After searching around, the reference I found to be the most seemingly useful was Erik's post here: http://openso