Re: [zfs-discuss] Dedup performance hit

2010-06-14 Thread Richard Elling
Erik is right, more below... On Jun 13, 2010, at 10:17 PM, Erik Trimble wrote: Hernan F wrote: Hello, I tried enabling dedup on a filesystem, and moved files into it to take advantage of it. I had about 700GB of files and left it for some hours. When I returned, only 70GB were moved. I
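For readers following along, a minimal sketch of the workflow being described, assuming a pool named tank and a new filesystem tank/deduped (names are illustrative, not from the original post):

  # zfs create tank/deduped
  # zfs set dedup=on tank/deduped      # enable dedup before copying the data in
  # zfs get dedup tank/deduped         # confirm the property took effect
  # zpool get dedupratio tank          # after copying, see how much actually deduplicated

Note that dedup only applies to data written after the property is set; pre-existing blocks are not rewritten.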

Re: [zfs-discuss] Snapshots, txgs and performance

2010-06-14 Thread Arne Jansen
Marcelo Leal wrote: Hello there, I think you should share it with the list if you can; it seems like interesting work. ZFS has some issues with snapshots and spa_sync performance for snapshot deletion. I'm a bit reluctant to post it to the list, where it can still be found years from now.

Re: [zfs-discuss] [zfs/zpool] hang at boot

2010-06-14 Thread schatten
Just FYI. The error was that I created the ZFS filesystem at the wrong place in the pool: rpool/a/b/c vs. rpool/new. I mounted new in a directory of rpool/a/b/c. Seems like this hierarchical mounting is not working like I thought. ;) -- This message posted from opensolaris.org

Re: [zfs-discuss] What happens when unmirrored ZIL log device is removed ungracefully

2010-06-14 Thread R. Eulenberg
Hello, I have this problem on my system too. I lost my backup server when the system HD and the ZIL device crashed. After setting up a new system (osol 2009.06, updated to the latest osol/dev version with zpool dedup) I tried to import my backup pool, but I can't. The system tells me there isn't

[zfs-discuss] size of slog device

2010-06-14 Thread Arne Jansen
Hi, I know it's been discussed here more than once, and I read the Evil Tuning Guide, but I didn't find a definitive statement: There is absolutely no sense in having slog devices larger than main memory, because it will never be used, right? ZFS would rather flush the txg to disk than

Re: [zfs-discuss] size of slog device

2010-06-14 Thread Thomas Burgess
On Mon, Jun 14, 2010 at 4:41 AM, Arne Jansen sensi...@gmx.net wrote: Hi, I know it's been discussed here more than once, and I read the Evil Tuning Guide, but I didn't find a definitive statement: There is absolutely no sense in having slog devices larger than main memory, because it

Re: [zfs-discuss] size of slog device

2010-06-14 Thread Roy Sigurd Karlsbakk
There is absolutely no sense in having slog devices larger than main memory, because it will never be used, right? ZFS would rather flush the txg to disk than read back from the zil? So there is a guideline to have enough slog to hold about 10 seconds of zil, but the absolute maximum value

Re: [zfs-discuss] Dedup performance hit

2010-06-14 Thread Dennis Clarke
You are severely RAM limited. In order to do dedup, ZFS has to maintain a catalog of every single block it writes and the checksum for that block. This is called the Dedup Table (DDT for short). So, during the copy, ZFS has to (a) read a block from the old filesystem, (b) check the
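To put rough numbers on the DDT for the 700GB copy discussed above (a back-of-the-envelope estimate, using the ~250 bytes per entry figure cited later in this thread):

  700GB / 128KB default recordsize ≈ 5.7 million blocks
  5.7 million entries x ~250 bytes ≈ 1.4GB of dedup table

With smaller blocks (e.g. 8K) the entry count, and therefore the table, grows about 16x. If that table does not fit in ARC (or L2ARC), every block written forces DDT reads from the pool disks, which matches the very slow copy being reported.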

Re: [zfs-discuss] Dedup performance hit

2010-06-14 Thread remi.urbillac
To add such a device, you would do: 'zpool add tank mycachedevice' Hi, correct me if I'm wrong, but I think the correct command should be: 'zpool add tank cache mycachedevice'. If you don't use the cache keyword, the device is added as a regular top-level vdev. Remi
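For clarity, the difference Remi is pointing at, sketched against a hypothetical pool tank and device c1t2d0:

  # zpool add tank c1t2d0           # adds c1t2d0 as a new top-level data vdev -- not what you want
  # zpool add tank cache c1t2d0     # adds c1t2d0 as an L2ARC cache device
  # zpool add -n tank cache c1t2d0  # dry run: shows the resulting layout without changing the pool

Cache devices can later be taken out with 'zpool remove', whereas a mistakenly added data vdev cannot be removed on the builds discussed here.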

Re: [zfs-discuss] size of slog device

2010-06-14 Thread Edward Ned Harvey
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Arne Jansen There is absolutely no sense in having slog devices larger than main memory, because it will never be used, right? Also: A TXG is guaranteed to flush within 30 sec. Let's
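Working that argument through with illustrative numbers (not from the post): the slog only ever needs to hold the synchronous writes that can accumulate before the open txg is committed, so an upper bound is roughly

  max sync ingest rate x txg interval ≈ 1GB/s (e.g. a 10GbE link) x 30s = 30GB

and in practice the write throttle caps a txg well below that (the RAM/2 figure mentioned later in this thread), so a slog in the tens of GB already covers worst cases far larger than typical workloads.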

Re: [zfs-discuss] size of slog device

2010-06-14 Thread Arne Jansen
Edward Ned Harvey wrote: From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Arne Jansen There is absolutely no sense in having slog devices larger than main memory, because it will never be used, right? Also: A TXG is guaranteed to

Re: [zfs-discuss] size of slog device

2010-06-14 Thread Arne Jansen
Roy Sigurd Karlsbakk wrote: There is absolutely no sense in having slog devices larger than main memory, because it will never be used, right? ZFS would rather flush the txg to disk than read back from the zil? So there is a guideline to have enough slog to hold about 10 seconds of zil,

Re: [zfs-discuss] Moved disks to new controller - cannot import pool even after moving back

2010-06-14 Thread Ross Walker
On Jun 13, 2010, at 2:14 PM, Jan Hellevik opensola...@janhellevik.com wrote: Well, for me it was a cure. Nothing else I tried got the pool back. As far as I can tell, the way to get it back should be to use symlinks to the fdisk partitions on my SSD, but that did not work for me. Using

Re: [zfs-discuss] size of slog device

2010-06-14 Thread Bob Friesenhahn
On Mon, 14 Jun 2010, Roy Sigurd Karlsbakk wrote: There is absolutely no sense in having slog devices larger than main memory, because it will never be used, right? ZFS would rather flush the txg to disk than read back from the zil? So there is a guideline to have enough slog to hold about 10

Re: [zfs-discuss] Unable to Install 2009.06 on BigAdmin Approved MOBO - FILE SYSTEM FULL

2010-06-14 Thread Cindy Swearingen
Hi Giovanni, My Monday morning guess is that the disk/partition/slices are not optimal for the installation. Can you provide the partition table of the disk that you are attempting to install to? Use format --> disk --> partition --> print. You want to put all the disk space in c*t*d*s0. See this
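For reference, the menu sequence being suggested looks roughly like this on an OpenSolaris text console (a sketch; the exact prompts vary by release and disk):

  # format
  (choose the target disk from the numbered list)
  format> partition
  partition> print       # show the current slice layout
  partition> 0           # edit slice 0 so it covers all usable space
  partition> label       # write the new label to disk
  partition> quit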

Re: [zfs-discuss] size of slog device

2010-06-14 Thread Roy Sigurd Karlsbakk
- Original Message - On Mon, 14 Jun 2010, Roy Sigurd Karlsbakk wrote: There is absolutely no sense in having slog devices larger than main memory, because it will never be used, right? ZFS would rather flush the txg to disk than read back from the zil? So there is a guideline

[zfs-discuss] COMSTAR dropouts with dedup enabled

2010-06-14 Thread Matthew Anderson
Hi All, I currently use b134 and COMSTAR to deploy SRP targets for virtual machine storage (VMware ESXi4) and have run into some unusual behaviour when dedup is enabled for a particular LUN. The target seems to lock up (ESX reports it as unavailable) when writing a large amount of data or overwriting

Re: [zfs-discuss] size of slog device

2010-06-14 Thread Bob Friesenhahn
On Mon, 14 Jun 2010, Roy Sigurd Karlsbakk wrote: It is good to keep in mind that only small writes go to the dedicated slog. Large writes go to the main store. A succession of that many small writes (to fill RAM/2) is highly unlikely. Also, that the zil is not read back unless the system is

[zfs-discuss] Permanent errors in files 0x0

2010-06-14 Thread Jan Ploski
I've been referred to here from the zfs-fuse newsgroup. I have a (non-redundant) pool which is reporting errors that I don't quite understand: # zpool status -v pool: green state: ONLINE status: One or more devices has experienced an error resulting in data corruption. Applications
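For anyone hitting the same symptom: entries reported as '0x0' or other bare object numbers in 'zpool status -v' generally mean the damaged object can no longer be resolved to a file name (for example, the file or snapshot has since been destroyed). Assuming the underlying device problem is fixed, the usual way to retire stale entries is a scrub followed by a clear, e.g.:

  # zpool scrub green
  # zpool status -v green   # re-check the error list once the scrub completes
  # zpool clear green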

Re: [zfs-discuss] Sync Write - ZIL log performance - Feedback for ZFS developers?

2010-06-14 Thread Roy Sigurd Karlsbakk
On 04/10/10 09:28, Edward Ned Harvey wrote: - If synchronous writes are large (>32K) and block aligned, then the blocks are written directly to the pool and a small record is written to the log. Later, when the txg commits, the blocks are just linked into the txg. However, this processing
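A related per-dataset knob worth knowing in this sub-thread: on builds recent enough to have it, the logbias property controls whether synchronous writes are optimized for latency (use the slog) or throughput (bypass the slog and go straight to the pool). A sketch with a hypothetical dataset tank/vmstore:

  # zfs get logbias tank/vmstore
  # zfs set logbias=throughput tank/vmstore   # large sync streams skip the slog
  # zfs set logbias=latency tank/vmstore      # default: sync writes use the slog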

Re: [zfs-discuss] size of slog device

2010-06-14 Thread Neil Perrin
On 06/14/10 12:29, Bob Friesenhahn wrote: On Mon, 14 Jun 2010, Roy Sigurd Karlsbakk wrote: It is good to keep in mind that only small writes go to the dedicated slog. Large writes go to the main store. A succession of that many small writes (to fill RAM/2) is highly unlikely. Also, that the zil is

Re: [zfs-discuss] COMSTAR dropouts with dedup enabled

2010-06-14 Thread Brandon High
On Sun, Jun 13, 2010 at 6:58 PM, Matthew Anderson matth...@ihostsolutions.com.au wrote: The problem didn't seem to occur with only a small amount of data on the LUN (50GB) and happened more frequently as the LUN filled up. I've since moved all data to non-dedup LUNs and I haven't seen a

Re: [zfs-discuss] COMSTAR dropouts with dedup enabled

2010-06-14 Thread Brandon High
On Mon, Jun 14, 2010 at 1:35 PM, Brandon High bh...@freaks.com wrote: How much memory do you have, and how big is the DDT? You can get the DDT size with 'zdb -DD'. The total count is the sum of duplicate and unique entries. Each entry uses ~250 bytes, so the count divided by 4 is a
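To make the divide-by-4 shorthand explicit: at ~250 bytes per entry, roughly four DDT entries fit in 1KB, so total entries / 4 gives the approximate in-core table size in KB. A worked example with made-up numbers:

  # zdb -DD tank            # prints the DDT histogram plus duplicate and unique entry counts
  total entries ≈ 5,700,000
  5,700,000 / 4 ≈ 1,425,000 KB ≈ 1.4GB of DDT to keep in ARC/L2ARC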

[zfs-discuss] Scrub issues

2010-06-14 Thread Roy Sigurd Karlsbakk
Hi all, It seems zfs scrub takes a big bite out of I/O when running. During a scrub, sync I/O, such as NFS and iSCSI, is mostly useless. Attaching an SLOG and some L2ARC helps this, but still, the problem remains that the scrub is given full priority. Is this problem known to the

Re: [zfs-discuss] Scrub issues

2010-06-14 Thread Robert Milkowski
On 14/06/2010 22:12, Roy Sigurd Karlsbakk wrote: Hi all, It seems zfs scrub takes a big bite out of I/O when running. During a scrub, sync I/O, such as NFS and iSCSI, is mostly useless. Attaching an SLOG and some L2ARC helps this, but still, the problem remains that the scrub is given

Re: [zfs-discuss] Scrub issues

2010-06-14 Thread Richard Elling
On Jun 14, 2010, at 2:12 PM, Roy Sigurd Karlsbakk wrote: Hi all, It seems zfs scrub takes a big bite out of I/O when running. During a scrub, sync I/O, such as NFS and iSCSI, is mostly useless. Attaching an SLOG and some L2ARC helps this, but still, the problem remains that the scrub

Re: [zfs-discuss] Native ZFS for Linux

2010-06-14 Thread Peter Jeremy
On 2010-Jun-11 17:41:38 +0800, Joerg Schilling joerg.schill...@fokus.fraunhofer.de wrote: PP.S.: Did you know that FreeBSD has _included_ the GPL'd ReiserFS in the FreeBSD kernel for a while now, and that nobody has complained about this? See e.g.:

Re: [zfs-discuss] size of slog device

2010-06-14 Thread Erik Trimble
On 6/14/2010 12:10 PM, Neil Perrin wrote: On 06/14/10 12:29, Bob Friesenhahn wrote: On Mon, 14 Jun 2010, Roy Sigurd Karlsbakk wrote: It is good to keep in mind that only small writes go to the dedicated slog. Large writes go to the main store. A succession of that many small writes (to fill

Re: [zfs-discuss] Scrub issues

2010-06-14 Thread George Wilson
Richard Elling wrote: On Jun 14, 2010, at 2:12 PM, Roy Sigurd Karlsbakk wrote: Hi all, It seems zfs scrub takes a big bite out of I/O when running. During a scrub, sync I/O, such as NFS and iSCSI, is mostly useless. Attaching an SLOG and some L2ARC helps this, but still, the problem remains

Re: [zfs-discuss] size of slog device

2010-06-14 Thread Richard Elling
On Jun 14, 2010, at 6:35 PM, Erik Trimble wrote: On 6/14/2010 12:10 PM, Neil Perrin wrote: On 06/14/10 12:29, Bob Friesenhahn wrote: On Mon, 14 Jun 2010, Roy Sigurd Karlsbakk wrote: It is good to keep in mind that only small writes go to the dedicated slog. Large writes go to the main store. A

Re: [zfs-discuss] size of slog device

2010-06-14 Thread Neil Perrin
On 06/14/10 19:35, Erik Trimble wrote: On 6/14/2010 12:10 PM, Neil Perrin wrote: On 06/14/10 12:29, Bob Friesenhahn wrote: On Mon, 14 Jun 2010, Roy Sigurd Karlsbakk wrote: It is good to keep in mind that only small writes go to the dedicated slog. Large writes to to main store. A succession