Re: [zfs-discuss] Opensolaris with J4400 - Experiences

2009-11-30 Thread Bruno Sousa
Hi Karl, Thank you for all your input, and I will keep this list updated about this project. Regards, Bruno Karl Katzke wrote: Bruno - Sorry, I don't have experience with OpenSolaris, but I *do* have experience running a J4400 with Solaris 10u8. First off, you need an LSI HBA for the

Re: [zfs-discuss] Data repaired, but no errors?

2009-11-30 Thread Brandon High
On Sun, Nov 29, 2009 at 10:38 AM, Brandon High bh...@freaks.com wrote: I recently ran a scrub on a zpool, and it's showing that data was repaired, but the drive doesn't have any read, write or checksum errors. Is this normal behavior, or is something weird going on? All the documentation
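
One way to dig deeper is to check the FMA error reports, which record the underlying events even when the per-device counters stay at zero. A minimal sketch; the pool name tank is hypothetical:

  # zpool status -v tank   (per-device read/write/checksum counters)
  # fmdump -eV | less      (raw FMA ereports, including checksum events)
  # fmadm faulty           (any faults FMA has actually diagnosed)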

[zfs-discuss] FreeNAS 0.7 zfs performance

2009-11-30 Thread Eugen Leitl
Just as a random data point, I have about 80-100 MBit/s write performance to a CIFS share on a 4x 1 TByte Seagate 7200.11 system (all four drives on the same PCI SATA Adaptec at 1.5 GBit), 2 GByte RAM, 2 GHz Athlon 64 with FreeNAS 0.7 (FreeBSD 7.2). This is raidz2. Interface is GBit Ethernet

Re: [zfs-discuss] Workaround for mpt timeouts in snv_127

2009-11-30 Thread James C. McPherson
Tru Huynh wrote: On Sat, Nov 21, 2009 at 07:08:20PM +1000, James C. McPherson wrote: If you and everybody else who is seeing this problem could provide details about your configuration (output from cfgadm -lva, raidctl -l, prtconf -v, what your zpool configs are, and the firmware rev of each
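
For reference, the data James is asking for could be captured roughly like this (output file names are arbitrary; the firmware revision of each HBA may also show up in the verbose prtconf output, depending on the driver):

  # cfgadm -lva     > cfgadm.out   (attachment point listing)
  # raidctl -l      > raidctl.out  (RAID controllers and volumes)
  # prtconf -v      > prtconf.out  (device tree with properties)
  # zpool status -v > zpool.out    (pool configurations)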

[zfs-discuss] possible mega_sas issue sol10u8 (Re: Workaround for mpt timeouts in snv_127)

2009-11-30 Thread Tru Huynh
Starting a new thread... On Mon, Nov 30, 2009 at 07:59:14PM +1000, James C. McPherson wrote: Tru Huynh wrote: ... On a Supermicro board, with 3 hw raid6 vdevs joined in a single pool, random hangs (weekly) which required a hardware reset, nothing in the logs. symptoms: rpool fine, zfs status

Re: [zfs-discuss] ZFS dedup clarification

2009-11-30 Thread Chavdar Ivanov
2009/11/27 Thomas Maier-Komor tho...@maier-komor.de: Chavdar Ivanov wrote: Hi, I BFU'd snv_128 successfully over snv_125: --- # cat /etc/release Solaris Express Community Edition snv_125 X86 Copyright 2009 Sun Microsystems, Inc. All Rights Reserved.

Re: [zfs-discuss] Workaround for mpt timeouts in snv_127

2009-11-30 Thread Mark Johnson
James C. McPherson wrote: Adam Cheal wrote: I thought you had just set set xpv_psm:xen_support_msi = -1 which is different, because that sets the xen_support_msi variable which lives inside the xpv_psm module. Setting mptsas:* will have no effect on your system if you do not have an mptsas
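
For reference, the xVM workaround quoted above is a single /etc/system line (comments in /etc/system start with *; a reboot is needed for it to take effect):

  * Disable MSI support under xVM (workaround from this thread)
  set xpv_psm:xen_support_msi = -1

As Mark points out, an mptsas:* setting would only matter on systems that actually attach the mptsas driver, not the mpt(7d) driver at issue in this thread.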

Re: [zfs-discuss] ARCSTAT Kstat Definitions

2009-11-30 Thread Christophe Lesbats
Dear Saanjeevb, In order to analyze a customer issue I am working on a script that computes ARC and L2ARC activity by interval. My intention is to analyze cache efficiency as well as evictions. I would appreciate more details about the hits breakdown: - Are m[fr]u_ghost_hits a subset of

Re: [zfs-discuss] Workaround for mpt timeouts in snv_127

2009-11-30 Thread Adam Cheal
Can folks confirm/deny each of these? o The problems are not seen with Sun's version of this card On the Thumper x4540 (which uses 6 of the same LSI 1068E controller chips), we do not see this problem. Then again, it uses a one-to-one mapping of controller PHY ports to internal disks; no

Re: [zfs-discuss] Adding drives to system - disk labels not consistent

2009-11-30 Thread Cindy Swearingen
Hi Stuart, On which Solaris release are you seeing this behavior? I would like to reproduce it and file a bug, if necessary. Thanks, Cindy On 11/29/09 13:06, Stuart Reid wrote: Answered my own question... When using the -n switch the output is truncated, i.e. the d0 is not printed. When actually

Re: [zfs-discuss] Workaround for mpt timeouts in snv_127

2009-11-30 Thread Travis Tabbal
o The problems are not seen with Sun's version of this card Unable to comment as I don't have a Sun card here. If Sun would like to send me one, I would be willing to test it compared to the cards I do have. I'm running Supermicro USAS-L8i cards (LSI 1068e based). o The problems are not

Re: [zfs-discuss] Help needed to find out where the problem is

2009-11-30 Thread Bob Friesenhahn
On Mon, 30 Nov 2009, Carsten Aulbert wrote: after the disk was exchanged, I ran 'zpool clear' and another zpool scrub afterwards... and guess what, now another vdev shows similar problems: Ugh! Now, the big question is, what could be faulty. fmadm only shows vdev checksum problems, right
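
The sequence Carsten describes, sketched with a hypothetical pool name:

  # zpool clear tank       (reset the error counters after the disk swap)
  # zpool scrub tank       (re-read and verify all data)
  # zpool status -v tank   (watch for new checksum errors per vdev)
  # fmadm faulty           (see what FMA has diagnosed so far)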

Re: [zfs-discuss] CR# 6574286, remove slog device

2009-11-30 Thread Moshe Vainer
Any news on this bug? We are trying to implement write acceleration, but can't deploy to production with this issue still not fixed. If anyone has an estimate (e.g., would it be part of 10.02?) I would very much appreciate knowing. Thanks, Moshe -- This message posted from opensolaris.org

Re: [zfs-discuss] CR# 6574286, remove slog device

2009-11-30 Thread Tim Cook
On Mon, Nov 30, 2009 at 2:30 PM, Moshe Vainer mvai...@doyenz.com wrote: Any news on this bug? We are trying to implement write acceleration, but can't deploy to production with this issue still not fixed. If anyone has an estimate (e.g., would it be part of 10.02?) I would very much appreciate

Re: [zfs-discuss] CR# 6574286, remove slog device

2009-11-30 Thread Pablo Méndez Hernández
Hi Moshe: On Mon, Nov 30, 2009 at 20:30, Moshe Vainer mvai...@doyenz.com wrote: Any news on this bug? We are trying to implement write acceleration, but can't deploy to production with this issue still not fixed. If anyone has an estimate (e.g., would it be part of 10.02?) I would very much

Re: [zfs-discuss] CR# 6574286, remove slog device

2009-11-30 Thread Moshe Vainer
I am sorry, I think I confused matters a bit. I meant the bug that prevents importing with a slog device missing, 6733267. I am aware that one can remove a slog device, but if you lose your rpool and the device goes missing while you rebuild, you will lose your pool in its entirety. Not a

Re: [zfs-discuss] CR# 6574286, remove slog device

2009-11-30 Thread Moshe Vainer
I was responding to this: Now I have an exported file system that I can't import because of the log device but the disks are all there. Except the original log device, which failed. Which actually means bug #6733267, not the one about slog removal. You can remove now (b125) but only if the pool
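
For context, slog removal as of b125 is a one-liner; the pool and device names here are hypothetical:

  # zpool remove tank c1t2d0   (removes the log device from the pool)
  # zpool status tank          (the logs section should now be gone)

Bug 6733267 is the separate case: the slog device disappears while the pool is exported, and the subsequent zpool import fails even though all the data disks are present.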

Re: [zfs-discuss] FreeNAS 0.7 zfs performance

2009-11-30 Thread Harald Dumdey
Hi Eugen, please have a look at this blog post of mine - http://harryd71.blogspot.com/2009/06/benchmark-of-freenas-07-and-single-ssd.html 105 MByte/s read - 77 MByte/s write over 1 GBit/s Ethernet is not too bad for a single SSD... Regards, Harry On Mon, Nov 30, 2009 at 10:02 AM, Eugen Leitl

Re: [zfs-discuss] Workaround for mpt timeouts in snv_127

2009-11-30 Thread Carson Gaspar
Mark Johnson wrote: I think there are two different bugs here... I think there is a problem with MSIs and some variant of mpt card on xVM. These seem to be showing up as timeout errors. Disabling MSIs for this adapter seems to fix this problem. For folks seeing this problem, what HBA adapter

Re: [zfs-discuss] FreeNAS 0.7 zfs performance

2009-11-30 Thread Daniel Carosone
I haven't used it myself, but you could look at the EON software NAS appliance: http://eonstorage.blogspot.com/ -- This message posted from opensolaris.org

Re: [zfs-discuss] Workaround for mpt timeouts in snv_127

2009-11-30 Thread Jeremy Kitchen
On Nov 30, 2009, at 2:14 PM, Carson Gaspar wrote: Mark Johnson wrote: I think there are two different bugs here... I think there is a problem with MSIs and some variant of mpt card on xVM. These seem to be showing up as timeout errors. Disabling MSIs for this adapter seems to fix this

Re: [zfs-discuss] Workaround for mpt timeouts in snv_127

2009-11-30 Thread Carson Gaspar
Carson Gaspar wrote: Mark Johnson wrote: I think there are two different bugs here... I think there is a problem with MSIs and some variant of mpt card on xVM. These seem to be showing up as timeout errors. Disabling MSIs for this adapter seems to fix this problem. For folks seeing this

Re: [zfs-discuss] Workaround for mpt timeouts in snv_127

2009-11-30 Thread James C. McPherson
Hi all, I believe it's an accurate summary of the emails on this thread over the last 18 hours to say that (1) disabling MSI support in xVM makes the problem go away (2) disabling MSI support on bare metal when you only have disks internal to your host (no jbods), makes the problem go

[zfs-discuss] mpt errors on snv 127

2009-11-30 Thread Chad Cantwell
Hi, Sorry for not replying to one of the already open threads on this topic; I've just joined the list for the purposes of this discussion and have nothing in my client to reply to yet. I have an x86_64 opensolaris machine running on a Core 2 Quad Q9650 platform with two LSI SAS3081E-R PCI-E 8

Re: [zfs-discuss] mpt errors on snv 127

2009-11-30 Thread James C. McPherson
Chad Cantwell wrote: Hi, Sorry for not replying to one of the already open threads on this topic; I've just joined the list for the purposes of this discussion and have nothing in my client to reply to yet. I have an x86_64 opensolaris machine running on a Core 2 Quad Q9650 platform with two

Re: [zfs-discuss] Workaround for mpt timeouts in snv_127

2009-11-30 Thread Chad Cantwell
Hi, I just posted a summary of a similar issue I'm having with non-Sun hardware. For the record, it's in a Chenbro RM41416 chassis with 4 Chenbro SAS backplanes but no expanders (each backplane is 4 disks connected by an SFF-8087 cable). Each of my LSI brand SAS3081E PCI-E cards is connected to

Re: [zfs-discuss] mpt errors on snv 127

2009-11-30 Thread Chad Cantwell
Hi, Replied to your previous general query already, but in summary, they are in the server chassis. It's a Chenbro 16 hotswap bay case. It has 4 mini backplanes that each connect via an SFF-8087 cable (1m) to my LSI cards (2 cables / 8 drives per card). Chad On Tue, Dec 01, 2009 at

Re: [zfs-discuss] mpt errors on snv 127

2009-11-30 Thread James C. McPherson
Chad Cantwell wrote: Hi, Replied to your previous general query already, but in summary, they are in the server chassis. It's a Chenbro 16 hotswap bay case. It has 4 mini backplanes that each connect via an SFF-8087 cable (1m) to my LSI cards (2 cables / 8 drives per card). Hi Chad, thanks

Re: [zfs-discuss] mpt errors on snv 127

2009-11-30 Thread Chad Cantwell
Hi, The Chenbro chassis contains everything - the motherboard/CPU, and the disks. As far as I know, the Chenbro backplanes are basically electrical jumpers that the LSI cards shouldn't be aware of. They pass the SATA signals straight through from the SFF-8087 cables to the disks. Thanks, Chad

[zfs-discuss] Is write(2) made durable atomically?

2009-11-30 Thread Chris Frost
Will a write(2) to a ZFS file be made durable atomically? Under the hood in ZFS, writes are committed using either shadow paging or logging, as I understand it. So what I mean to ask is whether a write(2), pushed to the ZPL and on down the stack, can be split into multiple

Re: [zfs-discuss] CR# 6574286, remove slog device

2009-11-30 Thread George Wilson
Moshe Vainer wrote: I am sorry, I think I confused matters a bit. I meant the bug that prevents importing with a slog device missing, 6733267. I am aware that one can remove a slog device, but if you lose your rpool and the device goes missing while you rebuild, you will lose your pool in

Re: [zfs-discuss] Workaround for mpt timeouts in snv_127

2009-11-30 Thread Travis Tabbal
(1) disabling MSI support in xVM makes the problem go away Yes here. (6) mpt(7d) without MSI support is sloow. That does seem to be the case. It's not so bad overall, and at least the performance is consistent. It would be nice if this were improved. For those of you who have been

Re: [zfs-discuss] Is write(2) made durable atomically?

2009-11-30 Thread Neil Perrin
Under the hood in ZFS, writes are committed using either shadow paging or logging, as I understand it. So what I mean to ask is whether a write(2), pushed to the ZPL and on down the stack, can be split into multiple transactions? Or, instead, is it guaranteed to be committed in a

Re: [zfs-discuss] Is write(2) made durable atomically?

2009-11-30 Thread Chris Frost
On Mon, Nov 30, 2009 at 11:03:07PM -0700, Neil Perrin wrote: A write made through the ZPL (zfs_write()) will be broken into transactions that contain at most 128KB user data. So a large write could well be split across transaction groups, and thus committed separately. That answers my exact
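
To make the consequence concrete: since zfs_write() splits a large write(2) into transactions of at most 128KB, an application that needs bounded loss can chunk its own writes. A minimal C sketch under that reading; the O_DSYNC usage and helper name are illustrative, not from the thread, and this does not make the whole buffer atomic:

  #include <fcntl.h>
  #include <unistd.h>

  /* Write buf in chunks of at most 128 KB, matching the per-transaction
   * limit Neil describes. With the fd opened O_DSYNC, each write(2)
   * returns only after its chunk is durable, so a crash loses at most
   * the chunk in flight -- the buffer still does not appear atomically. */
  static ssize_t write_chunked(int fd, const char *buf, size_t len)
  {
      const size_t chunk = 128 * 1024;
      size_t done = 0;
      while (done < len) {
          size_t n = (len - done < chunk) ? len - done : chunk;
          ssize_t w = write(fd, buf + done, n);
          if (w < 0)
              return (-1);
          done += (size_t)w;
      }
      return ((ssize_t)done);
  }

  /* Usage (hypothetical): fd = open(path, O_WRONLY | O_DSYNC, 0644); */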

Re: [zfs-discuss] Workaround for mpt timeouts in snv_127

2009-11-30 Thread Rob Logan
Chenbro 16 hotswap bay case. It has 4 mini backplanes that each connect via an SFF-8087 cable StarTech HSB430SATBK hmm, both are passive backplanes with one SATA tunnel per link... no SAS Expanders (LSISASx36) like those found in SuperMicro or J4x00 with 4 links per connection. wonder

Re: [zfs-discuss] Is write(2) made durable atomically?

2009-11-30 Thread Chris Frost
On Mon, Nov 30, 2009 at 10:23:06PM -0800, Chris Frost wrote: On Mon, Nov 30, 2009 at 11:03:07PM -0700, Neil Perrin wrote: A write made through the ZPL (zfs_write()) will be broken into transactions that contain at most 128KB user data. So a large write could well be split across transaction

Re: [zfs-discuss] mpt errors on snv 127

2009-11-30 Thread Chad Cantwell
After another crash I checked the syslog and there were some different errors than the ones I saw previously during operation: Nov 30 20:26:11 the-vault scsi: [ID 107833 kern.warning] WARNING: /p...@0,0/pci8086,2...@3/pci111d,8...@0/pci111d,8...@1/pci1000,3...@0 (mpt1): Nov 30 20:26:11

Re: [zfs-discuss] mpt errors on snv 127

2009-11-30 Thread James C. McPherson
Chad Cantwell wrote: After another crash I checked the syslog and there were some different errors than the ones I saw previously during operation: ... Nov 30 20:59:13 the-vault LSI PCI device (1000,) not supported. ... Nov 30 20:59:13 the-vault mpt_config_space_init failed