Re: [zfs-discuss] x4500 performance tuning.

2008-07-24 Thread Lida Horn
/cxtyd0p0 of=/dev/null and then try pulling out the disk. The dd should return with an I/O error virtually immediately. If it doesn't, then ZFS is probably not the issue. You can also issue the command cfgadm and see what it lists as the state(s) of the various disks. Hope that helps, Lida Horn
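For reference, the test Lida describes might look like the following sketch; the device name c1t3d0p0 is only a placeholder for the disk under test.

  # Read the raw disk directly, bypassing ZFS (placeholder device name):
  dd if=/dev/rdsk/c1t3d0p0 of=/dev/null bs=128k &

  # Pull the disk; the dd should terminate with an I/O error almost at once.
  # Then list the attachment points and their reported states:
  cfgadm -a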

Re: [zfs-discuss] Problem with AOC-SAT2-MV8

2008-06-30 Thread Lida Horn
Christophe Dupre wrote: Tim, the system is a Silicon Mechanics A266; the motherboard is a SuperMicro H8DM8E-2. I tried plugging the Marvell card into both 133MHz PCI-X slots. In one I get a lockup during install, in the other I get a reset on first boot. Just a shot in the dark, but is it

Re: [zfs-discuss] raid card vs zfs

2008-06-25 Thread Lida Horn
Tim wrote: On Wed, Jun 25, 2008 at 10:44 AM, Bob Friesenhahn [EMAIL PROTECTED] wrote: I see that the configuration tested in this X4500 writeup only uses the four built-in gigabit ethernet interfaces. This places a natural limit on the amount of

Re: [zfs-discuss] [SOLVED] USB hard to ZFS

2008-06-16 Thread Lida Horn
Andrius wrote: Bob Friesenhahn wrote: On Mon, 16 Jun 2008, Andrius wrote: Thanks! It works. Volume management is perhaps the thing that does not exist in ZFS, and that made disk management easier. Thanks to everybody for the advice. Volume Manager should be off before creating pools in
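As a rough sketch of that advice (the service and device names are assumptions and vary by release; Solaris 10 and early Nevada builds use volfs, later builds use rmvolmgr):

  # Stop removable-media volume management so it does not grab the USB disk:
  svcadm disable volfs        # or: svcadm disable rmvolmgr on newer builds

  # Create the pool on the whole disk (c2t0d0 is a placeholder name):
  zpool create usbpool c2t0d0

  # Turn volume management back on afterwards if desired:
  svcadm enable volfs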

Re: [zfs-discuss] Thumper / X4500 marvell driver issues

2008-04-23 Thread Lida Horn
Carson Gaspar wrote: [ Sending this here, as I've publicly complained about this bug on the ZFS list previously, and there have been prior threads related to the fix hitting OpenSolaris ] For those of you who have been suffering marvell device resets and hung I/Os on Sol 10 U4 with NCQ

Re: [zfs-discuss] 'zfs create' hanging

2008-03-10 Thread Lida Horn
Paul Raines wrote: Well, I ran updatemanager and started applying about 64 updates. After the progress meter got about halfway, it seemed to hang, not moving for hours. I finally gave up and did a reboot. But the machine would not reboot. I went into the ILOM and tried 'stop /SYS' but after a

Re: [zfs-discuss] 'zfs create' hanging

2008-03-10 Thread Lida Horn
Marc Bevand wrote: Paul Raines raines at nmr.mgh.harvard.edu writes: Mar 9 03:22:16 raidsrv03 sata: NOTICE: /pci@0,0/pci1022,7458@1/pci11ab,11ab@1: Mar 9 03:22:16 raidsrv03 port 6: device reset [...] The above repeated a few times but now seems to have stopped. Running 'hd

Re: [zfs-discuss] 3ware support

2008-02-12 Thread Lida Horn
Jason J. W. Williams wrote: X4500 problems seconded. Still having issues with port resets due to the Marvell driver. Though they seem considerably more transient and less likely to lock up the entire system in the most recent (> b72) OpenSolaris builds. Build 72 is pretty old. The build

Re: [zfs-discuss] 3ware support

2008-02-12 Thread Lida Horn
Carson Gaspar wrote: Tim wrote: A much cheaper (and probably the best supported) card is the SuperMicro one based on the Marvell chipset. This is the same chipset that is used in the Thumper X4500, so you know that the folks at Sun are doing their due diligence to make sure the

Re: [zfs-discuss] scrub halts

2008-02-12 Thread Lida Horn
Will Murnane wrote: On Feb 12, 2008 4:45 AM, Lida Horn [EMAIL PROTECTED] wrote: The latest changes to the sata and marvell88sx modules have been put back to Solaris Nevada and should be available in the next build (build 84). Hopefully, those of you who use it will find the changes

Re: [zfs-discuss] scrub halts

2008-02-11 Thread Lida Horn
The latest changes to the sata and marvell88sx modules have been put back to Solaris Nevada and should be available in the next build (build 84). Hopefully, those of you who use them will find the changes helpful.
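A quick way to confirm which modules and which build a machine is actually running (standard Solaris commands, shown here only as a sketch):

  # Show the loaded sata and marvell88sx modules and their revisions:
  modinfo | egrep 'sata|marvell88sx'

  # Confirm the Nevada build (e.g. snv_84 or later):
  cat /etc/release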

Re: [zfs-discuss] scrub halts

2008-02-06 Thread Lida Horn
I now have improved sata and marvell88sx driver modules that deal with various error conditions in a much more solid way. Changes include reducing the number of required device resets, properly reporting media errors (rather than "no additional sense"), clearing aborted packets more rapidly so

Re: [zfs-discuss] X4500 device disconnect problem persists

2007-11-13 Thread Lida Horn
The "reset: no matching NCQ I/O found" issue appears to be related to the error recovery for bad blocks on the disk. In general it should be harmless, but I have looked into this. If there is someone out there who: 1) is hitting this issue, and 2) is running recent Solaris Nevada bits (not
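If you want to check whether a box matches those two criteria, something along these lines should do (the log path is the usual Solaris default):

  # Are we on recent Solaris Nevada bits?
  uname -v

  # Has this machine actually seen the (generally harmless) reset notice?
  grep "no matching NCQ I/O found" /var/adm/messages*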

Re: [zfs-discuss] Force SATA1 on AOC-SAT2-MV8

2007-11-05 Thread Lida Horn
There is no way (short of patching the text of the driver) to alter the allowed SATA communication speeds for the marvell88sx driver. If you wish, you can request an RFE (Request For Enhancement), but I don't think it will be given high priority. Sorry, Lida Horn Thanks again, Eric

Re: [zfs-discuss] X4500 device disconnect problem persists

2007-10-27 Thread Lida Horn
Stuart Anderson wrote: After applying 125205-07 on two X4500 machines running Sol10U4 and removing "set sata:sata_func_enable = 0x5" from /etc/system to re-enable NCQ, I am again observing drive disconnect error messages. This is in spite of the patch description, which claims multiple fixes in
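For context, the /etc/system workaround Stuart refers to is the single line below; with it present the sata framework runs with NCQ disabled, and removing it (plus a reboot) re-enables NCQ.

  * /etc/system entry that disables NCQ in the sata framework:
  set sata:sata_func_enable = 0x5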

Re: [zfs-discuss] [storage-discuss] SATA Hotswap

2007-10-22 Thread Lida Horn
Jeff Creek wrote: I posted this in ZFS-Discuss. Eric Schrock suggested I ask in this forum. I am trying to test a new setup of NV74. I have set up the system with ZFS boot. Everything works fine until I pull a drive. The system locks up when I try to run any command, e.g. zpool status.
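One thing worth trying before a physical pull is to detach the drive through cfgadm first; the attachment-point name below (sata1/3) is only an example, so check cfgadm -a for the real ones.

  # List SATA attachment points and their states:
  cfgadm -a

  # Unconfigure the port before pulling the drive (example ap_id):
  cfgadm -c unconfigure sata1/3

  # After inserting the replacement, bring it back:
  cfgadm -c configure sata1/3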

Re: [zfs-discuss] scrub halts

2007-08-05 Thread Lida Horn
?bug_id=6564677 This bug and related fix has nothing to do with the zfs scrub issue. Regardless, I would like to know if this is happening with the marvell88sx driver (and if so, what hardware) or with some other driver and hardware. Regards, Lida Horn James C. McPherson -- Solaris kernel

[zfs-discuss] Re: [storage-discuss] NCQ performance

2007-05-29 Thread Lida Horn
Point one, the comments that Eric made do not give the complete picture. All the tests that Eric is referring to were done through the ZFS filesystem. When sequential I/O is done to the disk directly, there is no performance degradation at all. Second point, it does not take any additional time in
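A crude way to see the difference being described, with placeholder device and file names, is to time the same sequential read both ways:

  # Sequential read straight from the raw device, bypassing ZFS (placeholder name):
  ptime dd if=/dev/rdsk/c1t2d0p0 of=/dev/null bs=1024k count=4096

  # The same amount of sequential I/O through a file on a ZFS filesystem:
  ptime dd if=/tank/testfile of=/dev/null bs=1024k count=4096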

Re: [zfs-discuss] Re: [storage-discuss] NCQ performance

2007-05-29 Thread Lida Horn
Roch Bourbonnais wrote: On 29 May 2007 at 22:59, [EMAIL PROTECTED] wrote: When sequential I/O is done to the disk directly, there is no performance degradation at all. All filesystems impose some overhead compared to the rate of raw disk I/O. It's going to be hard to store data on a disk

[zfs-discuss] Re: RE: What SATA controllers are people using for ZFS?

2006-12-22 Thread Lida Horn
And yes, I would feel better if this driver were open sourced, but that is Sun's decision to make. Well, no. That is Marvell's decision to make. Marvell is the one who made the determination that the driver could not be open sourced, not Sun. Since Sun needed information received under

[zfs-discuss] Re: B54 and marvell cards

2006-12-22 Thread Lida Horn
We just put together a new system for ZFS use at a company, and twice in one week we've had the system wedge. You can log on, but the zpools are hosed, and a requested reboot never completes since it can't unmount the ZFS volumes. So only a power cycle works. I've tried to reproduce