Re: [zfs-discuss] Recovering from corrupt ZIL

2010-10-19 Thread Roy Sigurd Karlsbakk
- Original Message - First, this is under FreeBSD, but it isn't specific to that OS, and it involves some technical details beyond normal use, so I'm trying my luck here. I have a pool (around version 14) with a corrupted log device that's irrecoverable. I found a tool called
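
A minimal sketch of the later, built-in recovery path, assuming a pool named tank and a ZFS implementation recent enough to support importing with a missing log device (an older pool such as v14 may still need an external tool):

    # import the pool even though its separate log device is gone
    zpool import -m tank
    # check pool health; the dead log vdev can then be removed or replaced
    zpool status tank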

Re: [zfs-discuss] How to avoid striping ?

2010-10-19 Thread Sami Ketola
On 18 Oct 2010, at 17:44, Habony, Zsolt wrote: Thank you all for the comments. You should imagine a datacenter with standards that do not depend entirely on me, and a SAN serving many OSes, Solaris being one of them (and not the majority). So you get LUNs from the storage team and there is

[zfs-discuss] SSD partitioned into multiple L2ARC read cache

2010-10-19 Thread Gil Vidals
What would the performance impact be of splitting up a 64 GB SSD into four partitions of 16 GB each versus having the entire SSD dedicated to each pool? Scenario A: 2 TB Mirror w/ 16 GB read cache partition; 2 TB Mirror w/ 16 GB read cache partition; 2 TB Mirror w/ 16 GB read cache partition; 2 TB
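
For reference, a minimal sketch of the two layouts being compared; pool and device names are illustrative:

    # Scenario A: one 16 GB slice of the shared SSD as L2ARC for each pool
    zpool add tank1 cache c2t0d0s0
    zpool add tank2 cache c2t0d0s1
    # Scenario B: a whole SSD dedicated to a single pool
    zpool add tank1 cache c2t0d0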

Re: [zfs-discuss] Recovering from corrupt ZIL

2010-10-19 Thread Edward Ned Harvey
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Roy Sigurd Karlsbakk Last I checked, you lose the pool if you lose the slog on zpool versions < 19. I don't think there is a trivial way around this. You should plan for this to be true when
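
A quick sketch of checking where a pool stands, assuming a pool named tank and an example log device name:

    zpool get version tank     # current on-disk version of this pool
    zpool upgrade -v           # versions supported by the running ZFS bits
    # from pool version 19 onwards, a dedicated log device can be removed:
    zpool remove tank c3t0d0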

Re: [zfs-discuss] SSD partitioned into multiple L2ARC read cache

2010-10-19 Thread Edward Ned Harvey
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Gil Vidals What would the performance impact be of splitting up a 64 GB SSD into four partitions of 16 GB each versus having the entire SSD dedicated to each pool? This is a common

[zfs-discuss] rename zpool

2010-10-19 Thread sridhar surampudi
Hi, I have two questions: 1) Is there any way of renaming a zpool without export/import? 2) If I take a hardware snapshot of the devices under a zpool (where the snapshot devices will be exact copies, including metadata, i.e. the zpool and its associated file systems), is there any way to rename the zpool name of

[zfs-discuss] migration / vdev balancing

2010-10-19 Thread Trond Michelsen
Hi. I have a pool with 3 raidz1 vdevs (5*1,5TB + 5*1,5TB + 5*1TB), and I want to create 6-disk raidz2 vdevs instead. I've bought 12 2TB drives, and I already have additional 1,5TB and 1TB drives. My cabinet can only hold 24 drives (connected to an LSI SAS controller, and a Supermicro SAS

Re: [zfs-discuss] SSD partitioned into multiple L2ARC read cache

2010-10-19 Thread Bob Friesenhahn
On Tue, 19 Oct 2010, Gil Vidals wrote: What would the performance impact be of splitting up a 64 GB SSD into four partitions of 16 GB each versus having the entire SSD dedicated to each pool? Ignore Edward Ned Harvey's response because he answered the wrong question. For a L2ARC device,

Re: [zfs-discuss] migration / vdev balancing

2010-10-19 Thread Bob Friesenhahn
On Tue, 19 Oct 2010, Trond Michelsen wrote: Anyway - I'm wondering what is the best way to migrate the data in this system? I'm assuming that upgrading a raidz1 vdev to raidz2 is not possible, and I have to create a new pool, zfs send all the datasets and destroy the old pool. Is that correct?
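
A minimal sketch of that send/receive migration, assuming the old pool is tank and the new one newtank:

    zfs snapshot -r tank@migrate
    # replicate all datasets, snapshots and properties to the new pool
    zfs send -R tank@migrate | zfs receive -Fdu newtank
    # only after the copy has been verified:
    zpool destroy tank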

[zfs-discuss] Balancing LVOL fill?

2010-10-19 Thread Roy Sigurd Karlsbakk
Hi all. I have this server with some 50 TB of disk space. It originally had 30 TB on WD Greens and was filled quite full, so another storage chassis was added. Now the space problem is gone, fine, but what about speed? Three of the VDEVs are quite full, as indicated below. VDEV #3 (the one with the spare
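
For checking how unevenly the vdevs are filled, something like this (pool name illustrative):

    # per-vdev allocated/free capacity, one line per top-level vdev
    zpool iostat -v tank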

Re: [zfs-discuss] Balancing LVOL fill?

2010-10-19 Thread Roy Sigurd Karlsbakk
Obviously, I meant VDEVs, not LVOLs... It's been a long day... - Original Message - Hi all. I have this server with some 50 TB of disk space. It originally had 30 TB on WD Greens and was filled quite full, so another storage chassis was added. Now the space problem is gone, fine, but what about

Re: [zfs-discuss] SSD partitioned into multiple L2ARC read cache

2010-10-19 Thread Eff Norwood
We tried this in our environment and found that it didn't work out. The more partitions we used, the slower it went. We decided just to use the entire SSD as a read cache and it worked fine. Still has the TRIM issue of course until the next version. -- This message posted from opensolaris.org

Re: [zfs-discuss] What is the 1000 bit?

2010-10-19 Thread Linder, Doug
Nicolas Williams [mailto:nicolas.willi...@oracle.com] wrote: It's the sticky bit. Nowadays it's only useful on directories, and really it's generally only used with 777 permissions. The chmod(1) Thanks. It doesn't seem harmful. But it does make me wonder why it's showing up on my
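
A small illustration of the behavior described, using an example directory path:

    # mode 1777: world-writable, but only a file's owner (or root) may delete it
    chmod 1777 /var/tmp/shared
    ls -ld /var/tmp/shared    # shown as drwxrwxrwt; the trailing t is the sticky bit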

Re: [zfs-discuss] What is the 1000 bit?

2010-10-19 Thread Tomas Ögren
On 19 October, 2010 - Linder, Doug sent me these 1,2K bytes: Nicolas Williams [mailto:nicolas.willi...@oracle.com] wrote: It's the sticky bit. Nowadays it's only useful on directories, and really it's generally only used with 777 permissions. The chmod(1) Thanks. It doesn't seem

Re: [zfs-discuss] SSD partitioned into multiple L2ARC read cache

2010-10-19 Thread Gil Vidals
Based on the answers I received, I will stick to an SSD device fully dedicated to each pool. This means I will have four SSDs and four pools. This seems acceptable to me as it keeps things simpler and if one SSD (L2ARC) fails, the others are still working correctly. Thank you. Gil Vidals On

Re: [zfs-discuss] SSD partitioned into multiple L2ARC read cache

2010-10-19 Thread Roy Sigurd Karlsbakk
- Original Message - Based on the answers I received, I will stick to an SSD device fully dedicated to each pool. This means I will have four SSDs and four pools. This seems acceptable to me as it keeps things simpler and if one SSD (L2ARC) fails, the others are still working

Re: [zfs-discuss] vdev failure - pool loss ?

2010-10-19 Thread Tuomas Leikola
On Mon, Oct 18, 2010 at 8:18 PM, Simon Breden sbre...@gmail.com wrote: So are we all agreed then, that a vdev failure will cause pool loss ? -- unless you use copies=2 or 3, in which case your data is still safe for those datasets that have this option set. -- - Tuomas
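
For reference, a minimal sketch of setting it (dataset name illustrative); it only affects blocks written after the change, and as noted later in the thread it does not guarantee the copies land on different vdevs:

    zfs set copies=2 tank/important
    zfs get copies tank/important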

Re: [zfs-discuss] Finding corrupted files

2010-10-19 Thread Tuomas Leikola
On Mon, Oct 18, 2010 at 4:55 PM, Edward Ned Harvey sh...@nedharvey.com wrote: Thank you, but, the original question was whether a scrub would identify just corrupt blocks, or if it would be able to map corrupt blocks to a list of corrupt files. Just in case this wasn't already clear. After
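
A minimal sketch, assuming a pool named tank:

    zpool scrub tank
    # once the scrub completes, -v lists the files hit by permanent errors
    zpool status -v tank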

Re: [zfs-discuss] rename zpool

2010-10-19 Thread Cindy Swearingen
Hi Sridhar, The answer to the first question is definitely no: No way exists to change a pool name without exporting and importing the pool. I thought we had an open CR that covered renaming pools but I can't find it. The underlying pool devices contain pool information and no easy way exists
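
A minimal sketch of the export/import rename Cindy describes (names illustrative); note that a block-level hardware copy still carries the original pool name and GUID on its devices, which is what makes the second question harder:

    zpool export tank
    zpool import tank newname   # comes back online under the new name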

Re: [zfs-discuss] vdev failure - pool loss ?

2010-10-19 Thread taemun
Tuomas: My understanding is that the copies functionality doesn't guarantee that the extra copies will be kept on a different vdev. So that isn't entirely true. Unfortunately. On 20 October 2010 07:33, Tuomas Leikola tuomas.leik...@gmail.com wrote: On Mon, Oct 18, 2010 at 8:18 PM, Simon Breden

Re: [zfs-discuss] vdev failure - pool loss ?

2010-10-19 Thread Cindy Swearingen
On 10/19/10 14:33, Tuomas Leikola wrote: On Mon, Oct 18, 2010 at 8:18 PM, Simon Breden sbre...@gmail.com wrote: So are we all agreed then, that a vdev failure will cause pool loss ? -- unless you use copies=2 or 3, in which case your data is still safe for those datasets that have this

[zfs-discuss] NFS/SATA lockups (svc_cots_kdup no slots free sata port time out)

2010-10-19 Thread Ray Van Dolson
I have a Solaris 10 U8 box (142901-14) running as an NFS server with a 23-disk zpool behind it (three RAIDZ2 vdevs). We have a single Intel X-25E SSD operating as a slog (ZIL) device, attached to a SATA port on this machine's motherboard. The rest of the drives are in a hot-swap enclosure.

Re: [zfs-discuss] vdev failure - pool loss ?

2010-10-19 Thread Ross Walker
On Oct 19, 2010, at 4:33 PM, Tuomas Leikola tuomas.leik...@gmail.com wrote: On Mon, Oct 18, 2010 at 8:18 PM, Simon Breden sbre...@gmail.com wrote: So are we all agreed then, that a vdev failure will cause pool loss ? -- unless you use copies=2 or 3, in which case your data is still safe

[zfs-discuss] live upgrade with lots of zfs filesystems -- still broken

2010-10-19 Thread Paul B. Henson
A bit over a year ago I posted about a problem I was having with live upgrade on a system with lots of file systems mounted: http://opensolaris.org/jive/thread.jspa?messageID=411137#411137 An official Sun support call was basically just closed with no resolution. I was quite fortunate that

[zfs-discuss] Newbie ZFS Question: RAM for Dedup

2010-10-19 Thread Never Best
Sorry I couldn't find this anywhere yet. For deduping it is best to have the lookup table in RAM, but I wasn't too sure how much RAM is suggested? ::Assuming 128KB Block Sizes, and 100% unique data: 1TB*1024*1024*1024/128 = 8388608 Blocks ::Each Block needs 8 byte pointer? 8388608*8 = 67108864

Re: [zfs-discuss] vdev failure - pool loss ?

2010-10-19 Thread Bob Friesenhahn
On Tue, 19 Oct 2010, Cindy Swearingen wrote: unless you use copies=2 or 3, in which case your data is still safe for those datasets that have this option set. This advice is a little too optimistic. Increasing the copies property value on datasets might help in some failure scenarios, but

Re: [zfs-discuss] migration / vdev balancing

2010-10-19 Thread Edward Ned Harvey
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Trond Michelsen Hi. I think everything you said sounds perfectly right. As for estimating the time required to zfs send ... I don't know how badly zfs send gets hurt by the on-disk order or

Re: [zfs-discuss] SSD partitioned into multiple L2ARC read cache

2010-10-19 Thread Edward Ned Harvey
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Bob Friesenhahn Ignore Edward Ned Harvey's response because he answered the wrong question. Indeed. Although, now that I go back and actually read the question correctly, I wonder why

Re: [zfs-discuss] Newbie ZFS Question: RAM for Dedup

2010-10-19 Thread Peter Jeremy
On 2010-Oct-20 08:36:30 +0800, Never Best qui...@hotmail.com wrote: Sorry I couldn't find this anywhere yet. For deduping it is best to have the lookup table in RAM, but I wasn't too sure how much RAM is suggested? *Lots* ::Assuming 128KB Block Sizes, and 100% unique data:
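
A back-of-envelope sketch of why the answer is "lots", using the commonly cited figure of roughly 320 bytes of dedup-table entry per unique block rather than the 8 bytes assumed above (numbers illustrative):

    # 1 TB of unique data written in 128 KB records
    blocks=$(( 1024 * 1024 * 1024 * 1024 / (128 * 1024) ))    # 8388608 blocks
    echo "$(( blocks * 320 / 1024 / 1024 )) MB of DDT"        # ~2560 MB, i.e. ~2.5 GB per TB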