Re: [zfs-discuss] Difference between ZFS and UFS with one LUN from a SAN

2006-12-22 Thread przemolicc
On Thu, Dec 21, 2006 at 04:45:34PM +0100, Robert Milkowski wrote: Hello Shawn, Thursday, December 21, 2006, 4:28:39 PM, you wrote: SJ All, SJ I understand that ZFS gives you more error correction when using SJ two LUNs from a SAN. But, does it provide you with fewer features SJ than UFS

[zfs-discuss] !

2006-12-22 Thread Ulrich Graef
przemolicc wrote: Robert, I don't understand why not losing any data is an advantage of ZFS. No filesystem should lose any data. It is like saying that an advantage of a football player is that he/she plays football (he/she should do that!) or an advantage of a chef is that he/she cooks

Re[2]: [zfs-discuss] Difference between ZFS and UFS with one LUN from a SAN

2006-12-22 Thread Robert Milkowski
Hello przemolicc, Friday, December 22, 2006, 10:02:44 AM, you wrote: ppf On Thu, Dec 21, 2006 at 04:45:34PM +0100, Robert Milkowski wrote: Hello Shawn, Thursday, December 21, 2006, 4:28:39 PM, you wrote: SJ All, SJ I understand that ZFS gives you more error correction when using SJ

Re: [zfs-discuss] Re: zfs list and snapshots..

2006-12-22 Thread Robert Milkowski
Hello Wade, Thursday, December 21, 2006, 10:15:56 PM, you wrote: WSfc Hola folks, WSfc I am new to the list, please redirect me if I am posting to the wrong WSfc location. I am starting to use ZFS in production (Solaris x86 10U3 -- WSfc 11/06) and I seem to be seeing unexpected

[zfs-discuss] Re: !

2006-12-22 Thread przemolicc
Ulrich, in his e-mail Robert mentioned _two_ things regarding ZFS: [1] the ability to detect errors (checksums) [2] that using ZFS hasn't caused data loss so far. I completely agree that [1] is wonderful and a huge advantage. And you also underlined [1] in your e-mail! The _only_ thing I mentioned is

[zfs-discuss] Re: Re: Re: Snapshots impact on performance

2006-12-22 Thread Robert Milkowski
Hi. The problem is getting worse... now even if I destroy all snapshots in a pool I get performance problems, even with zil_disable set to 1. Despite having the limit for maximum NFS threads set to 2048, I get only about 1700. If I want to kill the nfsd server it takes 1-4 minutes until all
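For reference, the zil_disable tuning Robert mentions was typically done in one of two ways on that vintage of Solaris; a sketch, with the value taken from his message:

    * /etc/system -- takes effect at the next boot:
    set zfs:zil_disable = 1

    # Or flip it live in the running kernel with mdb (0t1 = decimal 1):
    echo 'zil_disable/W 0t1' | mdb -kw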

[zfs-discuss] Re: Re: Re: Snapshots impact on performance

2006-12-22 Thread Robert Milkowski
bash-3.00# lockstat -kgIW sleep 100 | head -30

Profiling interrupt: 38844 events in 100.098 seconds (388 events/sec)

Count genr cuml rcnt     nsec Hottest CPU+PIL  Caller
---------------------------------------------------------------
32081  83% 0.00          2432 cpu[1]

[zfs-discuss] Re: Difference between ZFS and UFS with one LUN from a SAN

2006-12-22 Thread Shawn Joy
OK, but let's get back to the original question. Does ZFS provide you with fewer features than UFS does on one LUN from a SAN (i.e. is it less stable)? ZFS, on the contrary, checks every block it reads and is able to find the mirror copy or reconstruct the data in a raidz config. Therefore ZFS uses only
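A minimal sketch of the mirrored case the self-healing argument relies on (pool and device names are made up):

    # Create a mirrored pool from two SAN LUNs; on a checksum failure
    # ZFS reads the good copy from the other side and repairs the bad one.
    zpool create tank mirror c2t0d0 c2t1d0
    zpool status tank    # per-device READ/WRITE/CKSUM error counters

With a single LUN and no redundancy, ZFS can still detect corruption but has no second copy to repair from, which is the crux of this thread.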

Re: Re[2]: [zfs-discuss] Difference between ZFS and UFS with one LUN from a SAN

2006-12-22 Thread Roch - PAE
Robert Milkowski writes: Hello przemolicc, Friday, December 22, 2006, 10:02:44 AM, you wrote: ppf On Thu, Dec 21, 2006 at 04:45:34PM +0100, Robert Milkowski wrote: Hello Shawn, Thursday, December 21, 2006, 4:28:39 PM, you wrote: SJ All, SJ I understand that

RE: [zfs-discuss] Re: Difference between ZFS and UFS with one LUN from a SAN

2006-12-22 Thread Tim Cook
This may not be the answer you're looking for, but I don't know if it's something you've thought of. If you're pulling a LUN from an expensive array, with multiple HBAs in the system, why not run MPxIO? If you ARE running MPxIO, there shouldn't be an issue with a path dropping. I have the
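For context, enabling MPxIO on Solaris 10 is a one-command affair (a sketch; stmsboot prompts for the reboot it needs):

    # Enable Solaris I/O multipathing on supported FC HBA ports:
    stmsboot -e
    # After the reboot, list the mapping from the old device names
    # to the new multipathed scsi_vhci names:
    stmsboot -L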

Re: [zfs-discuss] Re: Difference between ZFS and UFS with one LUN from a SAN

2006-12-22 Thread Shawn Joy
No, I have not played with this, as I do not have access to my customer's site. They have tested this themselves. It is unclear if they implemented this on an MPxIO/STMS device. I will ask this question. Thanks, Shawn Tim Cook wrote: This may not be the answer you're looking for, but I don't

[zfs-discuss] Re: RE: What SATA controllers are people using for ZFS?

2006-12-22 Thread Lida Horn
And yes, I would feel better if this driver was open sourced, but that is Sun's decision to make. Well, no. That is Marvell's decision to make. Marvell is the one who made the determination that the driver could not be open sourced, not Sun. Since Sun needed information received under

Re: [zfs-discuss] Re: RE: What SATA controllers are people using for ZFS?

2006-12-22 Thread Al Hopper
On Fri, 22 Dec 2006, Lida Horn wrote: And yes, I would feel better if this driver was open sourced, but that is Sun's decision to make. Well, no. That is Marvell's decision to make. Marvell is the one who made the determination that the driver could not be open sourced, not Sun.

Re: [zfs-discuss] Re: Difference between ZFS and UFS with one LUN from a SAN

2006-12-22 Thread Jason J. W. Williams
Just for what it's worth, when we rebooted a controller in our array (we pre-moved all the LUNs to the other controller), despite using MPxIO, ZFS panicked the kernel. We verified that all the LUNs were on the correct controller when this occurred. It's not clear why ZFS thought it lost a LUN, but it did.

RE: [zfs-discuss] Re: Difference between ZFS and UFS with one LUN from a SAN

2006-12-22 Thread Tim Cook
Always good to hear others' experiences, J. Maybe I'll try firing up the Nexsan today and downing a controller to see how that affects it vs. downing a switch port or pulling a cable. My first intuition is time-out values. A cable pull will register differently than a blatant time-out depending on

Re: [zfs-discuss] Re: Difference between ZFS and UFS with one LUN from a SAN

2006-12-22 Thread Jason J. W. Williams
Hi Tim, One-switch environment, two ports going to the host, four ports going to the storage. The switch is a Brocade SilkWorm 3850 and the HBA is a dual-port QLA2342. The Solaris rev is S10 update 3. The array is a StorageTek FLX210 (Engenio 2884). The LUNs had moved to the other controller and MPxIO had

Re: [zfs-discuss] Re: zfs list and snapshots..

2006-12-22 Thread Wade . Stuart
Robert Milkowski wrote on 12/22/2006 04:50:25 AM: Hello Wade, Thursday, December 21, 2006, 10:15:56 PM, you wrote: WSfc Hola folks, WSfc I am new to the list, please redirect me if I am posting to the wrong WSfc location. I am starting to use ZFS in production (Solaris

[zfs-discuss] Re: !

2006-12-22 Thread Anton B. Rang
Unfortunately there are some cases where the disks lose data; these cannot be detected by traditional filesystems, but can be with ZFS: * bit rot: some bits on the disk get flipped (~1 in 10^11) * phantom writes: a disk 'forgets' to write data (~1 in 10^8) * misdirected reads/writes: disk
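A quick way to see that detection in action is a scrub, which reads and verifies every checksummed block in a pool (pool name is made up):

    # Any bit rot, phantom write, or misdirected I/O surfaces
    # as a CKSUM error in the status output:
    zpool scrub tank
    zpool status -v tank    # -v also lists files with unrecoverable errors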

Re: [zfs-discuss] Re: !

2006-12-22 Thread Ed Gould
On Dec 22, 2006, at 09:50, Anton B. Rang wrote: Phantom writes and/or misdirected reads/writes: I haven't seen probabilities published on this; obviously the disk vendors would claim zero, but we believe they're slightly wrong. ;-) That said, 1 in 10^8 bits would mean we'd have an
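Finishing that truncated arithmetic as a back-of-the-envelope (the 50 MB/s streaming rate is an assumption for illustration, not from the thread):

    10^8 bits = 1.25 * 10^7 bytes = 12.5 MB
    50 MB/s / 12.5 MB per error = 4 errors per second

An error every quarter second is implausibly high for real hardware, which suggests the ~1 in 10^8 figure for phantom writes counts I/O operations rather than bits.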

[zfs-discuss] Re: B54 and marvell cards

2006-12-22 Thread Lida Horn
We just put together a new system for ZFS use at a company, and twice in one week we've had the system wedge. You can log on, but the zpools are hosed, and a requested reboot never completes since it can't unmount the ZFS volumes. So, only a power cycle works. I've tried to reproduce

Re: [zfs-discuss] Difference between ZFS and UFS with one LUN from a SAN

2006-12-22 Thread Torrey McMahon
Roch - PAE wrote: The fact that most FS do not manage the disk write caches does mean you're at risk of data loss for those FS. Does ZFS? I thought it just turned it on in the places where we had previously turned it off.

Re[2]: [zfs-discuss] Difference between ZFS and UFS with one LUN from a SAN

2006-12-22 Thread Robert Milkowski
Hello Torrey, Friday, December 22, 2006, 9:17:46 PM, you wrote: TM Roch - PAE wrote: The fact that most FS do not manage the disk write caches does mean you're at risk of data loss for those FS. TM Does ZFS? I thought it just turned it on in the places where we had TM previously turned it

[zfs-discuss] Re: Re: zfs list and snapshots..

2006-12-22 Thread Anton B. Rang
Do you have more than one snapshot? If you have a file system a, and create two snapshots a@snap1 and a@snap2, then any space shared between the two snapshots does not get accounted for anywhere visible. Only once one of those two is deleted, so that all the space is
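A sketch of the behavior Anton describes (dataset and snapshot names are made up):

    zfs snapshot tank/a@snap1
    zfs snapshot tank/a@snap2
    # Each snapshot's USED column counts only the blocks unique to it;
    # blocks referenced by both snapshots are charged to neither:
    zfs list -t snapshot -o name,used,referenced
    # Destroying snap1 makes the formerly shared blocks unique to
    # snap2, so they reappear in snap2's USED:
    zfs destroy tank/a@snap1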

Re: [zfs-discuss] Difference between ZFS and UFS with one LUN from a SAN

2006-12-22 Thread Neil Perrin
Robert Milkowski wrote on 12/22/06 13:40: Hello Torrey, Friday, December 22, 2006, 9:17:46 PM, you wrote: TM Roch - PAE wrote: The fact that most FS do not manage the disk write caches does mean you're at risk of data loss for those FS. TM Does ZFS? I thought it just turned it on in
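For the record, ZFS does both: it enables the drive write cache when it owns a whole disk, and it issues an explicit cache-flush request whenever it commits a transaction group or ZIL write, which is why leaving the cache on is safe. On arrays with battery-backed NVRAM the flushes are redundant and some sites disable them; a sketch only, since the tunable name varies by build and should be verified:

    * /etc/system -- tell ZFS not to send cache-flush requests.
    * Safe ONLY when the write cache is nonvolatile:
    set zfs:zfs_nocacheflush = 1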

Re[2]: [zfs-discuss] Re: Difference between ZFS and UFS with one LUN from a SAN

2006-12-22 Thread Robert Milkowski
Hello Jason, Friday, December 22, 2006, 5:55:38 PM, you wrote: JJWW Just for what it's worth, when we rebooted a controller in our array JJWW (we pre-moved all the LUNs to the other controller), despite using JJWW MPxIO, ZFS panicked the kernel. We verified that all the LUNs were on the JJWW correct

RE: Re[2]: [zfs-discuss] Re: Difference between ZFS and UFS with one LUN from a SAN

2006-12-22 Thread Tim Cook
More specifically, if you have the controllers in your array configured active/passive, with a failover timeout of 30 seconds, while the HBAs have a failover timeout of 20 seconds, then when it goes to fail over and cannot write to the disks... I'm sure *bad things* will happen. Again, I
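Tim's point is about stacked timeouts, and on Solaris the per-command disk timeout is one of the knobs you would align with the array's failover window (a sketch; whether the sd or ssd driver applies depends on the platform, and the value shown is illustrative):

    * /etc/system -- per-command disk I/O timeout, in seconds. Keep it
    * longer than the array controller's failover time so the host
    * retries instead of declaring the LUN dead mid-failover:
    set sd:sd_io_time = 60
    * For FC disks attached via the ssd driver (SPARC):
    set ssd:ssd_io_time = 60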

Re: [zfs-discuss] Re: Re: zfs list and snapshots..

2006-12-22 Thread Robert Milkowski
Hello Anton, Friday, December 22, 2006, 10:55:45 PM, you wrote: ABR Do you have more than one snapshot? ABR If you have a file system a, and create two snapshots a@snap1 ABR and a@snap2, then any space shared between the two snapshots does ABR not get accounted for anywhere

[zfs-discuss] Remote Replication

2006-12-22 Thread Eric Enright
Hi all, I'm currently investigating solutions for disaster recovery, and would like to go with a ZFS-based solution. From what I understand, there are two possible methods of achieving this: an iSCSI mirror over a WAN link, and remote replication with incremental zfs send/recv. Due to
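The send/recv approach usually looks something like this (host and dataset names are made up; a periodic sketch, not a complete DR setup):

    # Initial full replica to the remote site:
    zfs snapshot tank/data@rep1
    zfs send tank/data@rep1 | ssh drhost zfs recv backup/data
    # Thereafter, ship only the blocks changed since the last snapshot:
    zfs snapshot tank/data@rep2
    zfs send -i tank/data@rep1 tank/data@rep2 | ssh drhost zfs recv backup/data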

[zfs-discuss] Lots of snapshots make scrubbing extremely slow

2006-12-22 Thread Josip Gracin
Hello! I'm generating two snapshots per day on my ZFS pool. I've noticed that after a while, scrubbing gets very slow, e.g. taking 12 hours or more on a system with ca. 400 snapshots. I think the slowdown is progressive. When I delete most of the snapshots, things get back to normal, i.e.
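For anyone wanting to measure the same thing (pool name is made up):

    # Count the snapshots in the pool:
    zfs list -t snapshot | wc -l
    # Start a scrub and watch elapsed time / percent complete:
    zpool scrub tank
    zpool status tank    # reports 'scrub in progress' with a percentage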