Re: [zfs-discuss] Dedup and L2ARC memory requirements (again)

2011-04-29 Thread Brandon High
On Thu, Apr 28, 2011 at 6:48 PM, Edward Ned Harvey wrote:
> What does it mean / what should you do, if you run that command, and it
> starts spewing messages like this?
> leaked space: vdev 0, offset 0x3bd8096e00, size 7168
I'm not sure there's much you can do about it short of deleting datasets

Re: [zfs-discuss] Dedup and L2ARC memory requirements (again)

2011-04-29 Thread Edward Ned Harvey
> From: Neil Perrin [mailto:neil.per...@oracle.com]
>
> The size of these structures will vary according to the release you're running.
> You can always find out the size for a particular system using ::sizeof within
> mdb. For example, as super user:
>
> : xvm-4200m2-02 ; echo ::sizeof ddt_entry
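The ::sizeof technique quoted above can be sketched as a one-liner per structure. This is a rough illustration for a Solaris-derived system, run as root; `arc_buf_hdr_t` is my assumption of another structure one might want to check for L2ARC sizing, and exact names and sizes vary by release.

```
# Query in-kernel structure sizes with mdb (requires root on a live system).
echo ::sizeof ddt_entry | mdb -k      # in-core DDT entry, per the quote above
echo ::sizeof arc_buf_hdr_t | mdb -k  # ARC buffer header (assumed name), for L2ARC sizing
```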

Re: [zfs-discuss] Dedup and L2ARC memory requirements (again)

2011-04-29 Thread Edward Ned Harvey
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Edward Ned Harvey
>
> What does it mean / what should you do, if you run that command, and it
> starts spewing messages like this?
> leaked space: vdev 0, offset 0x3bd8096e00, size 7168
And on

Re: [zfs-discuss] Dedup and L2ARC memory requirements (again)

2011-04-29 Thread Edward Ned Harvey
> From: Edward Ned Harvey
>
> I saved the core and ran again. This time it spewed "leaked space" messages
> for an hour, and completed. But the final result was physically impossible (it
> counted up 744k total blocks, which means something like 3Megs per block in
> my 2.39T used pool). I checked c

Re: [zfs-discuss] Still no way to recover a "corrupted" pool

2011-04-29 Thread Brandon High
On Fri, Apr 29, 2011 at 1:23 PM, Freddie Cash wrote:
> Running ZFSv28 on 64-bit FreeBSD 8-STABLE.
I'd suggest trying to import the pool into snv_151a (Solaris 11 Express), which is the reference and development platform for ZFS.
-B
--
Brandon High : bh...@freaks.com

Re: [zfs-discuss] Still no way to recover a "corrupted" pool

2011-04-29 Thread Freddie Cash
On Fri, Apr 29, 2011 at 5:00 PM, Alexander J. Maidak wrote:
> On Fri, 2011-04-29 at 16:21 -0700, Freddie Cash wrote:
>> On Fri, Apr 29, 2011 at 1:23 PM, Freddie Cash wrote:
>> > Is there any way, yet, to import a pool with corrupted space_map
>> > errors, or "zio->io_type != ZIO_TYPE_WRITE" asserti

Re: [zfs-discuss] Still no way to recover a "corrupted" pool

2011-04-29 Thread Alexander J. Maidak
On Fri, 2011-04-29 at 16:21 -0700, Freddie Cash wrote:
> On Fri, Apr 29, 2011 at 1:23 PM, Freddie Cash wrote:
> > Is there any way, yet, to import a pool with corrupted space_map
> > errors, or "zio->io_type != ZIO_TYPE_WRITE" assertions?
>...
> Well, by commenting out the VERIFY line for zio->io_t

Re: [zfs-discuss] Still no way to recover a "corrupted" pool

2011-04-29 Thread Freddie Cash
On Fri, Apr 29, 2011 at 1:23 PM, Freddie Cash wrote:
> Is there any way, yet, to import a pool with corrupted space_map
> errors, or "zio->io_type != ZIO_TYPE_WRITE" assertions?
>
> I have a pool comprised of 4 raidz2 vdevs of 6 drives each. I have
> almost 10 TB of data in the pool (3 TB actual di

Re: [zfs-discuss] Faster copy from UFS to ZFS

2011-04-29 Thread Richard Elling
On Apr 29, 2011, at 1:37 PM, Brandon High wrote:
> On Fri, Apr 29, 2011 at 10:53 AM, Dan Shelton wrote:
>> Is anyone aware of any freeware program that can speed up copying tons of
>> data (2 TB) from UFS to ZFS on same server?
>
> Setting 'sync=disabled' for the initial copy will help, since it

Re: [zfs-discuss] Dedup and L2ARC memory requirements (again)

2011-04-29 Thread Erik Trimble
On 4/29/2011 9:44 AM, Brandon High wrote:
> On Fri, Apr 29, 2011 at 7:10 AM, Roy Sigurd Karlsbakk wrote:
>> This was fletcher4 earlier, and still is in opensolaris/openindiana. Given a
>> combination with verify (which I would use anyway, since there are always
>> tiny chances of collisions), why would

Re: [zfs-discuss] Faster copy from UFS to ZFS

2011-04-29 Thread Brandon High
On Fri, Apr 29, 2011 at 10:53 AM, Dan Shelton wrote:
> Is anyone aware of any freeware program that can speed up copying tons of
> data (2 TB) from UFS to ZFS on same server?

Setting 'sync=disabled' for the initial copy will help, since it will make all writes asynchronous. You will probably wan
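The sync=disabled suggestion above might look like this in practice. A sketch only: the pool/dataset name `tank/newdata` is a placeholder, and disabling sync trades crash safety for speed, so it should be reverted once the copy is done.

```
# WARNING: sync=disabled makes all writes asynchronous; data in flight is
# lost on a crash. Only sensible for a re-runnable bulk copy.
zfs set sync=disabled tank/newdata   # speed up the initial UFS -> ZFS copy
# ... run the bulk copy ...
zfs inherit sync tank/newdata        # restore the inherited/default setting
```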

Re: [zfs-discuss] Faster copy from UFS to ZFS

2011-04-29 Thread Ian Collins
On 04/30/11 06:00 AM, Freddie Cash wrote:
> On Fri, Apr 29, 2011 at 10:53 AM, Dan Shelton wrote:
>> Is anyone aware of any freeware program that can speed up copying tons of
>> data (2 TB) from UFS to ZFS on same server?
>
> rsync, with --whole-file --inplace (and other options), works well for the initi

[zfs-discuss] Still no way to recover a "corrupted" pool

2011-04-29 Thread Freddie Cash
Is there any way, yet, to import a pool with corrupted space_map errors, or "zio->io_type != ZIO_TYPE_WRITE" assertions?

I have a pool comprised of 4 raidz2 vdevs of 6 drives each. I have almost 10 TB of data in the pool (3 TB actual disk space used due to dedup and compression). While testing var

Re: [zfs-discuss] Faster copy from UFS to ZFS

2011-04-29 Thread Joerg Schilling
Dan Shelton wrote:
> Is anyone aware of any freeware program that can speed up copying tons
> of data (2 TB) from UFS to ZFS on same server?

Try star -copy

Note that due to problems ZFS has dealing with stable states, I recommend using -no-fsync, and it may of course help to specify a
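A rough sketch of the star-based copy suggested above. The directory paths are placeholders, and only -copy and -no-fsync come from the message itself; check star(1) on your system for the exact options and defaults.

```
# Copy a UFS tree into a ZFS dataset with star, skipping per-file fsync
# (directories are placeholders; see star(1) for preservation options).
star -copy -no-fsync -C /ufs/data . /tank/data
```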

Re: [zfs-discuss] Faster copy from UFS to ZFS

2011-04-29 Thread Freddie Cash
On Fri, Apr 29, 2011 at 10:53 AM, Dan Shelton wrote:
> Is anyone aware of any freeware program that can speed up copying tons of
> data (2 TB) from UFS to ZFS on same server?

rsync, with --whole-file --inplace (and other options), works well for the initial copy. rsync, with --no-whole-file --in

[zfs-discuss] Faster copy from UFS to ZFS

2011-04-29 Thread Dan Shelton
Is anyone aware of any freeware program that can speed up copying tons of data (2 TB) from UFS to ZFS on same server? Thanks.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

Re: [zfs-discuss] Dedup and L2ARC memory requirements (again)

2011-04-29 Thread Brandon High
On Fri, Apr 29, 2011 at 7:10 AM, Roy Sigurd Karlsbakk wrote:
> This was fletcher4 earlier, and still is in opensolaris/openindiana. Given a
> combination with verify (which I would use anyway, since there are always
> tiny chances of collisions), why would sha256 be a better choice?

fletcher4
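The checksum-plus-verify combination discussed here is set per dataset. A sketch with a placeholder dataset name; with verify, a checksum match is confirmed byte-for-byte before blocks are shared, so even a weak-hash collision cannot silently corrupt data.

```
# Dedup with SHA-256 and byte-for-byte verification on checksum hits:
zfs set dedup=sha256,verify tank/data
# "verify" alone uses the release's default dedup checksum plus verification:
zfs set dedup=verify tank/data
```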

Re: [zfs-discuss] Dedup and L2ARC memory requirements (again)

2011-04-29 Thread Roy Sigurd Karlsbakk
> Controls whether deduplication is in effect for a
> dataset. The default value is off. The default checksum
> used for deduplication is sha256 (subject to change).
>
> This is from b159.

This was fletcher4 earlier, and still is in opensolaris/openindiana. Given a combination with verify (whic

Re: [zfs-discuss] Dedup and L2ARC memory requirements (again)

2011-04-29 Thread Edward Ned Harvey
> From: Richard Elling [mailto:richard.ell...@gmail.com]
>
>> Worse yet, your arc consumption could be so large, that
>> PROCESSES don't fit in ram anymore. In this case, your processes get
>> pushed out to swap space, which is really bad.
>
> This will not happen. The ARC will be asked to
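The memory numbers argued over in this thread can be sanity-checked with back-of-envelope arithmetic: unique blocks times the in-core DDT entry size. A sketch only; the ~376-byte entry size is a commonly cited figure but is release-dependent (the ::sizeof approach mentioned earlier in the thread gives the real value), and the 128K average block size is an assumption.

```shell
# Rough DDT RAM estimate for the ~2.39T pool mentioned in the thread.
pool_used_bytes=$(( 2390 * 1024 * 1024 * 1024 ))  # ~2.39T used
avg_block_size=$(( 128 * 1024 ))                  # assumed 128K average blocks
entry_bytes=376                                   # assumed in-core DDT entry size

blocks=$(( pool_used_bytes / avg_block_size ))
ddt_bytes=$(( blocks * entry_bytes ))
echo "unique blocks:   $blocks"
echo "approx DDT size: $(( ddt_bytes / 1024 / 1024 )) MiB"
```

With these assumptions the pool holds about 19.6 million blocks and the DDT alone wants roughly 7 GiB of ARC/L2ARC, which is why the 744k-block figure earlier in the thread looked physically impossible.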