Re: [zfs-discuss] HP JBOD D2700 - ok?

2012-02-02 Thread Ragnar Sundblad

Note:

Below is Edmund's configuration for his setup, which is also a good
example of using different options for different devices.

It was I who thought this was the config from NexentaStor - it was not!

(Sorry for my slow brain; sadly, it is the only one I've got! :-) )

/ragge

On 1 feb 2012, at 03:06, Ragnar Sundblad wrote:

 
 On 1 feb 2012, at 02:43, Edmund White wrote:
 
 You will definitely want to have a Smart Array card (p411 or p811) on hand
 to update the firmware on the enclosure. Make sure you're on firmware
 version 0131. You may also want to update the disk firmware at the  same
 time.
 
 I have multipath and my drive LEDs work well enough to perform drive
 identification.
 
 Ok, thanks for the tip! I will try that. We are at 0103 currently.
 
 I'm on NexentaStor, though. My scsi_vhci.conf looks like:
 
 scsi-vhci-failover-override =
 HP  EG0300, f_sym,
   HP  MO0400, f_sym,
   HP  DG0300, f_sym,
   HP  DH072, f_sym;
 
 Yes, you have to list all the devices that you want to match, except
 for those (pretty few) that the driver itself matches.
 It uses partial string matching, so you can abbreviate to
 match more devices.
 I guess EG0300 means the 300 GB disks, and that the above won't
 match, for example, the 600 GB drives beginning with EG0600.
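 For illustration, a single abbreviated entry would cover every EG-series
 drive, 300 GB and 600 GB alike - a hypothetical sketch, assuming the usual
 scsi_vhci.conf quoting and the 8-character SCSI INQUIRY vendor-ID padding
 that the list archive has collapsed above:

 scsi-vhci-failover-override =
     "HP      EG", "f_sym";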
 
   device-type-mpxio-options-list=
   device-type=HP  EG0300,
 load-balance-options=logical-block-options,
   device-type=HP  DG0300,
 load-balance-options=logical-block-options;
   logical-block-options=load-balance=logical-block,
 region-size=18;
 
 Interesting, they have listed those two in a separate
 device-type-mpxio-options-list instead of setting load-balance
 and region-size globally. I guess they don't want
 load-balance=logical-block for the MO0400 or DH072, whatever
 those are.
 
 /ragge
 


Re: [zfs-discuss] HP JBOD D2700 - ok?

2012-02-01 Thread Jim Klimov

2012-02-01 6:22, Ragnar Sundblad wrote:

That is almost what I do, except that I only have one HBA.
We haven't seen many HBAs fail during the years, none actually, so we
thought it was overkill to double those too. But maybe we are wrong?


Question: if you use two HBAs on different PCI buses to
do MPxIO to the same JBODs, wouldn't this double your
peak performance between the motherboard and the disks (besides
adding resilience to failure of one of the paths)?

This might be less important with JBODs of HDDs, but
more important with external arrays of SSD disks...
or very many HDDs :)

Thanks in advance for clearing that up for me,
//Jim


Re: [zfs-discuss] HP JBOD D2700 - ok?

2012-02-01 Thread Richard Elling

On Feb 1, 2012, at 4:09 AM, Jim Klimov wrote:

 2012-02-01 6:22, Ragnar Sundblad wrote:
 That is almost what I do, except that I only have one HBA.
 We haven't seen many HBAs fail during the years, none actually, so we
 thought it was overkill to double those too. But maybe we are wrong?
 
 Question: if you use two HBAs on different PCI buses to
 do MPxIO to the same JBODs, wouldn't this double your
 peak performance between the motherboard and the disks (besides
 adding resilience to failure of one of the paths)?

In general, for HDDs no, for SSDs yes.

 This might be less important with JBODs of HDDs, but
 more important with external arrays of SSD disks...
 or very many HDDs :)

With a fast SSD, you can easily get 700+ MB/sec when using mpxio, even
with a single HBA.
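A rough back-of-the-envelope sketch (assuming a 4-lane 6 Gb/s SAS wide port,
8b/10b encoding, and ballpark per-drive rates, not measurements):

  per lane:    6 Gb/s x 8/10          ~  600 MB/s usable
  per port:    4 lanes x 600 MB/s     ~ 2400 MB/s
  25 HDDs:     25 x ~100-150 MB/s     ~ 2500-3750 MB/s (sequential best case)
  a few SSDs:   6 x ~400 MB/s         ~ 2400 MB/s

So a single 4-lane link is hard to saturate with HDDs on realistic, mostly
non-sequential workloads, but a handful of fast SSDs reaches it quickly -
which is where a second HBA starts to pay off for throughput as well as for
resilience.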
 -- richard

--
ZFS Performance and Training
richard.ell...@richardelling.com
+1-760-896-4422





Re: [zfs-discuss] HP JBOD D2700 - ok?

2012-02-01 Thread Richard Elling
Thanks for the info, James!

On Jan 31, 2012, at 6:58 PM, James C. McPherson wrote:

 On  1/02/12 12:40 PM, Ragnar Sundblad wrote:
 ...
 I still don't really get what stmsboot -u actually does (and if - and if
 so how much - this differs between x86 and sparc).
 Would it be impolite to ask you to elaborate on this a little?
 
 Not at all. Here goes.
 
 /usr/sbin/stmsboot -u arms the mpxio-upgrade service so that it
 runs when you reboot.
 
 
 The mpxio-upgrade service
 
 #1 execs /lib/stmsboot_util -u, to do the actual rewriting of vfstab
 #2 execs metadevadm if you have any SVM metadevices
 #3 updates your boot archive
 #4 execs dumpadm to ensure that you have the correct dump device
   listed in /etc/dumpadm.conf
 #5 updates your boot path property on x64, if required.

Most or all of these are UFS-oriented. I've never found a need to run 
stmsboot when using ZFS root, even when changing from non-mpxio 
to mpxio.

Incidentally, the process to change the boot drive from IDE legacy mode to
AHCI is very similar; the Oracle docs say you have to reinstall the OS, but
clearly it can be done without reinstalling, as shown in the zfs-discuss
archives.
 -- richard

 
 
 /lib/stmsboot_util is the binary which does the heavy lifting. Each
 vfstab device element is checked - the cache that was created prior
 to the reboot is used to identify where the new paths are. You can
 see this cache by running strings over /etc/mpxio/devid_path.cache.
 
 
 
 This is all available for your perusal at
 
 http://src.opensolaris.org/source/xref/onnv/onnv-gate/usr/src/cmd/stmsboot/
 
 
 cheers,
 James
 --
 Oracle
 http://www.jmcp.homeunix.com/blog

--
ZFS Performance and Training
richard.ell...@richardelling.com
+1-760-896-4422





Re: [zfs-discuss] HP JBOD D2700 - ok?

2012-01-31 Thread Ragnar Sundblad

Just to follow up on this, in case there are others interested:

The D2700s seem to work quite OK for us. We have four issues with them,
all of which we will ignore for now:
- They hang when I insert an Intel SATA (!) SSD (I wanted to test,
  both for log device and cache device, and I had those around).
  This could probably be fixed with a firmware upgrade, but:
- It seems the firmware can't be upgraded if you don't have one of a few
  special HP RAID cards! Silly!
- The LEDs on the disks: on the first bay it is turned off, on the rest
  they are turned on. They all flash on activity. I have no idea why this
  is, and I know too little about SAS chassis to even guess. This could
  possibly change with a firmware upgrade of the chassis controllers, but
  maybe not.
- In Solaris 11, the /dev/chassis/HP-D2700-SAS-AJ941A.xx.../Drive_bay_NN
  is supposed to contain a soft link to the device for the disk in the bay.
  This doesn't seem to work for bay 0 (a quick check is sketched below).
  It may be related to the previous problem, but maybe not.
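A quick way to check what each bay link resolves to on Solaris 11 - a hedged
sketch, assuming croinfo(1M) and the /dev/chassis tree are present on your
build:

# ls -lR /dev/chassis
# croinfo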

(We may buy an HP RAID card just to be able to upgrade their firmware.)

If we had had the time, we probably would have tested some other JBODs
too, but we need to get these rolling soon, and they seem good enough.

We have tested them with multipathed SAS, using a single LSI SAS 9205-8e
HBA and connecting the two ports on the HBA to the two controllers in the
D2700.

To get multipathing, you need to configure the scsi_vhci driver, in
/kernel/drv/scsi_vhci.conf for sol10 or /etc/driver/drv/scsi_vhci.conf for
sol11-x86. To get better performance, you probably want to use
load-balance=logical-block instead of load-balance=round-robin.
See examples below.

You may also need to run stmsboot -e to enable multipathing. I still haven't
figured out what that does (more than updating /etc/vfstab and /etc/dumpadm.conf,
which you typically don't use with ZFS), maybe nothing.
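To verify that the two paths actually coalesced under scsi_vhci, the standard
tools are mpathadm(1M) and stmsboot -L; a sketch, where the logical-unit name
is a placeholder for whatever the first command prints:

# mpathadm list lu
# mpathadm show lu /dev/rdsk/<lu-from-the-list-above>
# stmsboot -L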

Thanks to all that have helped with input!

/ragge


-


For solaris 10u8 and later, in /kernel/drv/scsi_vhci.conf.DIST:
###
...
device-type-scsi-options-list =
  HP  D2700 SAS AJ941A, symmetric-option,
  HP  EG, symmetric-option;
# HP 600 GB 2.5 inch SAS disks: EG0600FBDBU, EG0600FBDSR
symmetric-option = 0x100;

device-type-mpxio-options-list =
  device-type=HP  D2700 SAS AJ941A, 
load-balance-options=logical-block-options,
  device-type=HP  EG, load-balance-options=logical-block-options;
# HP 600 GB 2.5 inch SAS disks: EG0600FBDBU, EG0600FBDSR
logical-block-options=load-balance=logical-block, region-size=20;
...
###


For solaris 11, in /etc/driver/drv/scsi_vhci.conf on x86
(in /kernel/drv/scsi_vhci.conf.DIST on sparc?):
###
...
#load-balance=round-robin;
load-balance=logical-block;
region-size=20;
...
scsi-vhci-failover-override =
   HP  D2700 SAS AJ941A, f_sym,
   HP  EG,   f_sym;
# HP 600 GB 2.5 inch SAS disks: EG0600FBDBU, EG0600FBDSR
###
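A hedged footnote: as James McPherson points out elsewhere in this thread, if
you edit scsi_vhci.conf yourself (rather than relying on the mpt_sas defaults),
the change is propagated with

# stmsboot -u

and a reboot, which runs the mpxio-upgrade service so that vfstab, the boot
archive and the dump configuration are rewritten to match the new paths.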



Re: [zfs-discuss] HP JBOD D2700 - ok?

2012-01-31 Thread Hung-Sheng Tsao (laoTsao)
What is the server you attach to the D2700?
The HP spec for the D2700 did not include Solaris, so I am not sure how you
get support from HP :-(

Sent from my iPad

On Jan 31, 2012, at 20:25, Ragnar Sundblad ra...@csc.kth.se wrote:

 
 Just to follow up on this, in case there are others interested:
 
 The D2700s seems to work quite ok for us. We have four issues with them,
 all of which we will ignore for now:
 - They hang when I insert an Intel SSD SATA (!) disk (I wanted to test,
  both for log device and cache device, and I had those around).
  This could probably be fixed with a firmware upgrade, but:
 - It seems the firmware can't be upgraded if you don't have one of a few
  special HP raid cards! Silly!
 - The LEDs on the disks: On the first bay it is turned off, on the rest
  it is turned on. They all flash at activity. I have no idea why this
  is, and I know too little about SAS chassis to even guess. This could
  possibly change with a firmware upgrade of the chassis controllers, but
  maybe not.
 - In Solaris 11, the /dev/chassis/HP-D2700-SAS-AJ941A.xx.../Drive_bay_NN
  is supposed to contain a soft link to the device for the disk in the bay.
  This doesn't seem to work for bay 0. It may be related to the previous
  problem, but maybe not.
 
 (We may buy a HP raid card just to be able to upgrade their firmware.)
 
 If we have had the time we probably would have tested some other jbods
 too, but we need to get those rolling soon, and these seem good enough.
 
 We have tested them with multipathed SAS, using a single LSI SAS 9205-8e
 HBA and connecting the two ports on the HBA to the two controllers in the
 D2700.
 
 To get multipathing, you need to configure the scsi_vhci driver, in
 /kernel/drv/scsi_vhci.conf for sol10 or /etc/driver/drv/scsi_vhci.conf for
 sol11-x86. To get better performance, you probably want to use
 load-balance=logical-block instead of load-balance=round-robin.
 See examples below.
 
 You may also need to run stmsboot -e to enable multipathing. I still haven't
 figured out what that does (more than updating /etc/vfstab and /etc/dumpadm.conf,
 which you typically don't use with ZFS), maybe nothing.
 
 Thanks to all that have helped with input!
 
 /ragge
 
 
 -
 
 
 For solaris 10u8 and later, in /kernel/drv/scsi_vhci.conf.DIST:
 ###
 ...
 device-type-scsi-options-list =
  HP  D2700 SAS AJ941A, symmetric-option,
  HP  EG, symmetric-option;
 # HP 600 GB 2.5 inch SAS disks: EG0600FBDBU, EG0600FBDSR
 symmetric-option = 0x100;
 
 device-type-mpxio-options-list =
  device-type=HP  D2700 SAS AJ941A, 
 load-balance-options=logical-block-options,
  device-type=HP  EG, load-balance-options=logical-block-options;
 # HP 600 GB 2.5 inch SAS disks: EG0600FBDBU, EG0600FBDSR
 logical-block-options=load-balance=logical-block, region-size=20;
 ...
 ###
 
 
 For solaris 11, in /etc/driver/drv/scsi_vhci.conf on x86
 (in /kernel/drv/scsi_vhci.conf.DIST on sparc?):
 ###
 ...
 #load-balance=round-robin;
 load-balance=logical-block;
 region-size=20;
 ...
 scsi-vhci-failover-override =
   HP  D2700 SAS AJ941A, f_sym,
   HP  EG,   f_sym;
 # HP 600 GB 2.5 inch SAS disks: EG0600FBDBU, EG0600FBDSR
 ###
 


Re: [zfs-discuss] HP JBOD D2700 - ok?

2012-01-31 Thread Edmund White
You will definitely want to have a Smart Array card (p411 or p811) on hand
to update the firmware on the enclosure. Make sure you're on firmware
version 0131. You may also want to update the disk firmware at the  same
time.

I have multipath and my drive LEDs work well enough to perform drive
identification.
I'm on NexentaStor, though. My scsi_vhci.conf looks like:

scsi-vhci-failover-override =
HP  EG0300, f_sym,
HP  MO0400, f_sym,
HP  DG0300, f_sym,
HP  DH072, f_sym;

device-type-mpxio-options-list=
device-type=HP  EG0300,
load-balance-options=logical-block-options,
device-type=HP  DG0300,
load-balance-options=logical-block-options;
logical-block-options=load-balance=logical-block,
region-size=18;



-- 
Edmund White
ewwh...@mac.com




On 1/31/12 7:25 PM, Ragnar Sundblad ra...@csc.kth.se wrote:


Just to follow up on this, in case there are others interested:

The D2700s seems to work quite ok for us. We have four issues with them,
all of which we will ignore for now:
- They hang when I insert an Intel SSD SATA (!) disk (I wanted to test,
  both for log device and cache device, and I had those around).
  This could probably be fixed with a firmware upgrade, but:
- It seems the firmware can't be upgraded if you don't have one of a few
  special HP raid cards! Silly!
- The LEDs on the disks: On the first bay it is turned off, on the rest
  it is turned on. They all flash at activity. I have no idea why this
  is, and I know too little about SAS chassis to even guess. This could
  possibly change with a firmware upgrade of the chassis controllers, but
  maybe not.
- In Solaris 11, the /dev/chassis/HP-D2700-SAS-AJ941A.xx.../Drive_bay_NN
  is supposed to contain a soft link to the device for the disk in the
bay.
  This doesn't seem to work for bay 0. It may be related to the previous
  problem, but maybe not.

(We may buy a HP raid card just to be able to upgrade their firmware.)

If we have had the time we probably would have tested some other jbods
too, but we need to get those rolling soon, and these seem good enough.

We have tested them with multipathed SAS, using a single LSI SAS 9205-8e
HBA and connecting the two ports on the HBA to the two controllers in the
D2700.

To get multipathing, you need to configure the scsi_vhci driver, in
/kernel/drv/scsi_vhci.conf for sol10 or /etc/driver/drv/scsi_vhci.conf for
sol11-x86. To get better performance, you probably want to use
load-balance=logical-block instead of load-balance=round-robin.
See examples below.

You may also need to run stmsboot -e to enable multipathing. I still
haven't
figured out what that does (more than updating /etc/vfstab and
/etc/dumpadm.conf,
which you typically don't use with ZFS), maybe nothing.

Thanks to all that have helped with input!

/ragge


-


For solaris 10u8 and later, in /kernel/drv/scsi_vhci.conf.DIST:
###
...
device-type-scsi-options-list =
  HP  D2700 SAS AJ941A, symmetric-option,
  HP  EG, symmetric-option;
# HP 600 GB 2.5 inch SAS disks: EG0600FBDBU, EG0600FBDSR
symmetric-option = 0x100;

device-type-mpxio-options-list =
  device-type=HP  D2700 SAS AJ941A,
load-balance-options=logical-block-options,
  device-type=HP  EG, load-balance-options=logical-block-options;
# HP 600 GB 2.5 inch SAS disks: EG0600FBDBU, EG0600FBDSR
logical-block-options=load-balance=logical-block, region-size=20;
...
###


For solaris 11, in /etc/driver/drv/scsi_vhci.conf on x86
(in /kernel/drv/scsi_vhci.conf.DIST on sparc?):
###
...
#load-balance=round-robin;
load-balance=logical-block;
region-size=20;
...
scsi-vhci-failover-override =
   HP  D2700 SAS AJ941A, f_sym,
   HP  EG,   f_sym;
# HP 600 GB 2.5 inch SAS disks: EG0600FBDBU, EG0600FBDSR
###



Re: [zfs-discuss] HP JBOD D2700 - ok?

2012-01-31 Thread James C. McPherson


Hi Ragge,

On  1/02/12 11:25 AM, Ragnar Sundblad wrote:


Just to follow up on this, in case there are others interested:

...

To get multipathing, you need to configure the scsi_vhci driver, in
/kernel/drv/scsi_vhci.conf for sol10 or /etc/driver/drv/scsi_vhci.conf for
sol11-x86. To get better performance, you probably want to use
load-balance=logical-block instead of load-balance=round-robin.
See examples below.

You may also need to run stmsboot -e to enable multipathing. I still haven't
figured out what that does (more than updating /etc/vfstab and /etc/dumpadm.conf,
which you typically don't use with ZFS), maybe nothing.


The supported way to enable MPxIO is to run

# /usr/sbin/stmsboot -e

You shouldn't need to do this for mpt_sas HBAs such as
your 9205 controllers; we enable MPxIO by default on them.

If you _do_ edit scsi_vhci.conf, you need to utter

# /usr/sbin/stmsboot -u

in order for those changes to be correctly propagated.

You can (and should) read about this in the stmsboot(1m) manpage,
and there's more information available in my blog post

http://blogs.oracle.com/jmcp/entry/on_stmsboot_1m


James C. McPherson
--
Oracle
http://www.jmcp.homeunix.com/blog


Re: [zfs-discuss] HP JBOD D2700 - ok?

2012-01-31 Thread Ragnar Sundblad

On 1 feb 2012, at 02:38, Hung-Sheng Tsao (laoTsao) wrote:

 what is the server you attach to D2700?

They are various Sun/Oracle X4NN0s, x86-64 boxes.

 the hp spec for d2700 did not include solaris, so not sure how you get 
 support from hp:-(


We don't. :-(

/ragge



Re: [zfs-discuss] HP JBOD D2700 - ok?

2012-01-31 Thread Ragnar Sundblad

On 1 feb 2012, at 02:43, Edmund White wrote:

 You will definitely want to have a Smart Array card (p411 or p811) on hand
 to update the firmware on the enclosure. Make sure you're on firmware
 version 0131. You may also want to update the disk firmware at the  same
 time.
 
 I have multipath and my drive LEDs work well enough to perform drive
 identification.

Ok, thanks for the tip! I will try that. We are at 0103 currently.

 I'm on NexentaStor, though. My scsi_vhci.conf looks like:
 
 scsi-vhci-failover-override =
 HP  EG0300, f_sym,
HP  MO0400, f_sym,
HP  DG0300, f_sym,
HP  DH072, f_sym;

Yes, you have to list all the devices that you want to match, except
for those (pretty few) that the driver itself matches.
It uses partial string matching, so you can abbreviate to
match more devices.
I guess EG0300 means the 300 GB disks, and that the above won't
match, for example, the 600 GB drives beginning with EG0600.

device-type-mpxio-options-list=
device-type=HP  EG0300,
 load-balance-options=logical-block-options,
device-type=HP  DG0300,
 load-balance-options=logical-block-options;
logical-block-options=load-balance=logical-block,
 region-size=18;

Interesting, they have listed those two in a separate
device-type-mpxio-options-list instead of setting load-balance
and region-size globally. I guess they don't want
load-balance=logical-block for the MO0400 or DH072, whatever
those are.

/ragge



Re: [zfs-discuss] HP JBOD D2700 - ok?

2012-01-31 Thread Rocky Shek
Ragnar,

Which Intel SSD do you use? We use the 320 and 710. We have had bad experience
with the 510 in the past.

Yes, logical-block makes it faster in an MPxIO setup.

If you are using the 9205-8E, you don't need to use stmsboot -e.
By default, the mpt_sas driver for the 9205-8E already has MPxIO enabled.
stmsboot -e is useful for enabling MPxIO on older 3G HBAs.

With an MPxIO setup like the following, you can protect against HBA, cable,
and JBOD SAS I/O module failure:

http://dataonstorage.com/dataon-solutions/125-unified-storage-system.html

The slot 0 issue is related to the SES mapping in the JBOD firmware. It seems
their firmware is not smart enough with other HBAs under Solaris 11.

Using an HP HBA and their tool should fix it.

Rocky

-Original Message-
From: zfs-discuss-boun...@opensolaris.org
[mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Ragnar Sundblad
Sent: Tuesday, January 31, 2012 5:26 PM
To: Ragnar Sundblad
Cc: zfs-discuss@opensolaris.org Discuss
Subject: Re: [zfs-discuss] HP JBOD D2700 - ok?


Just to follow up on this, in case there are others interested:

The D2700s seems to work quite ok for us. We have four issues with them, all
of which we will ignore for now:
- They hang when I insert an Intel SSD SATA (!) disk (I wanted to test,
  both for log device and cache device, and I had those around).
  This could probably be fixed with a firmware upgrade, but:
- It seems the firmware can't be upgraded if you don't have one of a few
  special HP raid cards! Silly!
- The LEDs on the disks: On the first bay it is turned off, on the rest
  it is turned on. They all flash at activity. I have no idea why this
  is, and I know too little about SAS chassis to even guess. This could
  possibly change with a firmware upgrade of the chassis controllers, but
  maybe not.
- In Solaris 11, the /dev/chassis/HP-D2700-SAS-AJ941A.xx.../Drive_bay_NN
  is supposed to contain a soft link to the device for the disk in the bay.
  This doesn't seem to work for bay 0. It may be related to the previous
  problem, but maybe not.

(We may buy a HP raid card just to be able to upgrade their firmware.)

If we have had the time we probably would have tested some other jbods too,
but we need to get those rolling soon, and these seem good enough.

We have tested them with multipathed SAS, using a single LSI SAS 9205-8e HBA
and connecting the two ports on the HBA to the two controllers in the D2700.

To get multipathing, you need to configure the scsi_vhci driver, in
/kernel/drv/scsi_vhci.conf for sol10 or /etc/driver/drv/scsi_vhci.conf for
sol11-x86. To get better performance, you probably want to use
load-balance=logical-block instead of load-balance=round-robin.
See examples below.

You may also need to run stmsboot -e to enable multipathing. I still
haven't figured out what that does (more than updating /etc/vfstab and
/etc/dumpadm.conf, which you typically don't use with ZFS), maybe nothing.

Thanks to all that have helped with input!

/ragge


-


For solaris 10u8 and later, in /kernel/drv/scsi_vhci.conf.DIST:
###
...
device-type-scsi-options-list =
  HP  D2700 SAS AJ941A, symmetric-option,
  HP  EG, symmetric-option;
# HP 600 GB 2.5 inch SAS disks: EG0600FBDBU, EG0600FBDSR
symmetric-option = 0x100;

device-type-mpxio-options-list =
  device-type=HP  D2700 SAS AJ941A,
load-balance-options=logical-block-options,
  device-type=HP  EG, load-balance-options=logical-block-options;
# HP 600 GB 2.5 inch SAS disks: EG0600FBDBU, EG0600FBDSR
logical-block-options=load-balance=logical-block, region-size=20;
...
###


For solaris 11, in /etc/driver/drv/scsi_vhci.conf on x86 (in
/kernel/drv/scsi_vhci.conf.DIST on sparc?):
###
...
#load-balance=round-robin;
load-balance=logical-block;
region-size=20;
...
scsi-vhci-failover-override =
   HP  D2700 SAS AJ941A, f_sym,
   HP  EG,   f_sym;
# HP 600 GB 2.5 inch SAS disks: EG0600FBDBU, EG0600FBDSR
###



Re: [zfs-discuss] HP JBOD D2700 - ok?

2012-01-31 Thread Richard Elling
Hi Edmund,

On Jan 31, 2012, at 5:43 PM, Edmund White wrote:

 You will definitely want to have a Smart Array card (p411 or p811) on hand
 to update the firmware on the enclosure. Make sure you're on firmware
 version 0131. You may also want to update the disk firmware at the  same
 time.
 
 I have multipath and my drive LEDs work well enough to perform drive
 identification.
 I'm on NexentaStor, though. My scsi_vhci.conf looks like:
 
 scsi-vhci-failover-override =
 HP  EG0300, f_sym,
HP  MO0400, f_sym,
HP  DG0300, f_sym,
HP  DH072, f_sym;

Please file a support ticket with Nexenta and ask them to update issue #5664
to add these to the default list.

 
device-type-mpxio-options-list=
device-type=HP  EG0300,
 load-balance-options=logical-block-options,
device-type=HP  DG0300,
 load-balance-options=logical-block-options;
logical-block-options=load-balance=logical-block,
 region-size=18;

IMHO, it is easier to set the default to logical-block in scsi_vhci.conf,
which is exactly what Nexenta issue #5664 does.
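For reference, the global form - the same two lines (minus the quoting that
the list archive has stripped) that appear in the Solaris 11 example quoted
further down in this message:

load-balance="logical-block";
region-size=20;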

NB, the changes for issue #5664 can't be added to the NexentaStor 3.x branch.
Look for them in the next major release. For new installations, you can add the
changes, though.
 -- richard

 
 
 
 -- 
 Edmund White
 ewwh...@mac.com
 
 
 
 
 On 1/31/12 7:25 PM, Ragnar Sundblad ra...@csc.kth.se wrote:
 
 
 Just to follow up on this, in case there are others interested:
 
 The D2700s seems to work quite ok for us. We have four issues with them,
 all of which we will ignore for now:
 - They hang when I insert an Intel SSD SATA (!) disk (I wanted to test,
 both for log device and cache device, and I had those around).
 This could probably be fixed with a firmware upgrade, but:
 - It seems the firmware can't be upgraded if you don't have one of a few
 special HP raid cards! Silly!
 - The LEDs on the disks: On the first bay it is turned off, on the rest
 it is turned on. They all flash at activity. I have no idea why this
 is, and I know too little about SAS chassis to even guess. This could
 possibly change with a firmware upgrade of the chassis controllers, but
 maybe not.
 - In Solaris 11, the /dev/chassis/HP-D2700-SAS-AJ941A.xx.../Drive_bay_NN
 is supposed to contain a soft link to the device for the disk in the
 bay.
 This doesn't seem to work for bay 0. It may be related to the previous
 problem, but maybe not.
 
 (We may buy a HP raid card just to be able to upgrade their firmware.)
 
 If we have had the time we probably would have tested some other jbods
 too, but we need to get those rolling soon, and these seem good enough.
 
 We have tested them with multipathed SAS, using a single LSI SAS 9205-8e
 HBA and connecting the two ports on the HBA to the two controllers in the
 D2700.
 
 To get multipathing, you need to configure the scsi_vhci driver, in
 /kernel/drv/scsi_vhci.conf for sol10 or /etc/driver/drv/scsi_vhci.conf for
 sol11-x86. To get better performance, you probably want to use
 load-balance=logical-block instead of load-balance=round-robin.
 See examples below.
 
 You may also need to run stmsboot -e to enable multipathing. I still
 haven't
 figured out what that does (more than updating /etc/vfstab and
 /etc/dumpadm.conf,
 which you typically don't use with ZFS), maybe nothing.
 
 Thanks to all that have helped with input!
 
 /ragge
 
 
 -
 
 
 For solaris 10u8 and later, in /kernel/drv/scsi_vhci.conf.DIST:
 ###
 ...
 device-type-scsi-options-list =
 HP  D2700 SAS AJ941A, symmetric-option,
 HP  EG, symmetric-option;
 # HP 600 GB 2.5 inch SAS disks: EG0600FBDBU, EG0600FBDSR
 symmetric-option = 0x100;
 
 device-type-mpxio-options-list =
 device-type=HP  D2700 SAS AJ941A,
 load-balance-options=logical-block-options,
 device-type=HP  EG, load-balance-options=logical-block-options;
 # HP 600 GB 2.5 inch SAS disks: EG0600FBDBU, EG0600FBDSR
 logical-block-options=load-balance=logical-block, region-size=20;
 ...
 ###
 
 
 For solaris 11, in /etc/driver/drv/scsi_vhci.conf on x86
 (in /kernel/drv/scsi_vhci.conf.DIST on sparc?):
 ###
 ...
 #load-balance=round-robin;
 load-balance=logical-block;
 region-size=20;
 ...
 scsi-vhci-failover-override =
  HP  D2700 SAS AJ941A, f_sym,
  HP  EG,   f_sym;
 # HP 600 GB 2.5 inch SAS disks: EG0600FBDBU, EG0600FBDSR
 ###
 

--
ZFS Performance and Training
richard.ell...@richardelling.com
+1-760-896-4422




Re: [zfs-discuss] HP JBOD D2700 - ok?

2012-01-31 Thread Ragnar Sundblad

Hello Rocky!

On 1 feb 2012, at 03:07, Rocky Shek wrote:

 Ragnar,
 
 Which Intel SSD do you use? We use 320 and 710. We have bad experience with
 510 in the past 

I tried with an Intel X25-M 160 GB and 80 GB and an X25-E 64 GB (only because
that was what I had in my drawer). I am not sure which one of them made it
lock up; maybe it was all of them.

Since the head is an X4150 with 8 slots and a plain LSI SAS HBA, I put
them in there instead and went ahead.

 Yes, logical-block make it faster in MPxIO setup.
 
 If you are using 9205-8E, you don't need to use stmsboot -e 
 By default, mpt_sas driver for 9205-8E is  already MPxIO enable.
 stmsboot-e is useful to enable old 3G HBA MPxIO feature.

Ok, thanks for the information, good!
So it just changes the mpxio-disable=yes/no in the driver.conf files?
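For reference, that property looks roughly like this in the relevant HBA
driver.conf - a hedged sketch; historically stmsboot -e/-d toggled it in files
such as /kernel/drv/fp.conf or mpt.conf, and the exact file varies by release
and driver:

# "no" means MPxIO is enabled for all ports driven by this driver.
mpxio-disable="no";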

 With MPxIO like the following setup, you can protect HBA, cable, JBOD SAS IO
 module failure
 
 http://dataonstorage.com/dataon-solutions/125-unified-storage-system.html

That is almost what I do, except that I only have one HBA.
We haven't seen many HBAs fail over the years, none actually, so we
thought it was overkill to double those too. But maybe we are wrong?

 the slot 0 issue is related to their SES mapping in JBOD FW. It seems their
 FW is not genius enough with other HBA under solaris 11.
 
 Using HP HBA and their tool should fix it.

Thanks! I will try to update the firmware in the chassis and see what that
gives. I really hesitate to use HP HBAs - if they have changed anything
from the OEM firmware it is hard to tell how compatible they are.

/ragge



Re: [zfs-discuss] HP JBOD D2700 - ok?

2012-01-31 Thread Ragnar Sundblad

Hello James!

On 1 feb 2012, at 02:43, James C. McPherson wrote:

 The supported way to enable MPxIO is to run
 
 # /usr/sbin/stmsboot -e
 
 You shouldn't need to do this for mpt_sas HBAs such as
 your 9205 controllers; we enable MPxIO by default on them.
 
 If you _do_ edit scsi_vhci.conf, you need to utter
 
 # /usr/sbin/stmsboot -u
 
 in order for those changes to be correctly propagated.
 
 You can (and should) read about this in the stmsboot(1m) manpage,
 and there's more information available in my blog post
 
 http://blogs.oracle.com/jmcp/entry/on_stmsboot_1m

Thanks for the info!

I have read the man page a few times, and I actually did read your blog
post too when I started with this and just googled around like crazy.

I still don't really get what stmsboot -u actually does (and if - and if
so how much - this differs between x86 and sparc).
Would it be impolite to ask you to elaborate on this a little?

/ragge



Re: [zfs-discuss] HP JBOD D2700 - ok?

2012-01-31 Thread James C. McPherson

On  1/02/12 12:40 PM, Ragnar Sundblad wrote:
...

I still don't really get what stmsboot -u actually does (and if - and if
so how much - this differs between x86 and sparc).
Would it be impolite to ask you to elaborate on this a little?


Not at all. Here goes.

/usr/sbin/stmsboot -u arms the mpxio-upgrade service so that it
runs when you reboot.


The mpxio-upgrade service

#1 execs /lib/stmsboot_util -u, to do the actual rewriting of vfstab
#2 execs metadevadm if you have any SVM metadevices
#3 updates your boot archive
#4 execs dumpadm to ensure that you have the correct dump device
   listed in /etc/dumpadm.conf
#5 updates your boot path property on x64, if required.
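A quick way to confirm that the service is armed before you reboot - a hedged
sketch, assuming the usual FMRI svc:/system/device/mpxio-upgrade:default:

# svcs -l svc:/system/device/mpxio-upgrade:default
# svcprop -p general/enabled svc:/system/device/mpxio-upgrade:default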


/lib/stmsboot_util is the binary which does the heavy lifting. Each
vfstab device element is checked - the cache that was created prior
to the reboot is used to identify where the new paths are. You can
see this cache by running strings over /etc/mpxio/devid_path.cache.
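For example (the path is as given above; the output format may vary by
release):

# strings /etc/mpxio/devid_path.cache | head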



This is all available for your perusal at

http://src.opensolaris.org/source/xref/onnv/onnv-gate/usr/src/cmd/stmsboot/


cheers,
James
--
Oracle
http://www.jmcp.homeunix.com/blog


Re: [zfs-discuss] HP JBOD D2700 - ok?

2011-11-30 Thread Edmund White
Absolutely. 

I'm using a fully-populated D2700 with an HP ProLiant DL380 G7 server
running NexentaStor.

On the HBA side, I used the LSI 9211-8i 6G controllers for the server's
internal disks (boot, a handful of large disks, Pliant SSDs for L2Arc).
There is also a DDRDrive for ZIL. To connect to the D2700 enclosure, I
used 2 x LSI 9205 6G HBAs; one 4-lane SAS cable per storage controller on
the D2700.

These were setup with MPxIO (dual controllers, dual paths, dual-ported
disks) and required a slight bit of tuning of /kernel/drv/scsi_vhci.conf,
but the performance is great now. The enclosure is supported and I've been
able to set up drive slot maps and control disk LEDs, etc.

-- 
Edmund White
ewwh...@mac.com
847-530-1605




On 11/30/11 5:27 AM, Ragnar Sundblad ra...@csc.kth.se wrote:


Hello all,

We are thinking about using HP D2700 SAS enclosures with Sun X41xx servers
and Solaris (Solaris 10, at least to begin with).

Has anyone any experience with using those with Solaris and zfs?

What would you recommend for HBAs?
We currently have the Sun branded LSI SAS3801e HBAs (1068e based), which
are 3 Gb/s. Would those (probably) work OK even if we should consider
switching to 6 Gb/s HBAs?
What 6 Gb/s HBA is currently recommended (LSI 920[05]?s).

Thanks for any advice and/or thoughts!

Ragnar Sundblad
Royal Institute of Technology
Stockholm, Sweden



Re: [zfs-discuss] HP JBOD D2700 - ok?

2011-11-30 Thread Sašo Kiselkov
On 11/30/2011 02:40 PM, Edmund White wrote:
 Absolutely. 
 
 I'm using a fully-populated D2700 with an HP ProLiant DL380 G7 server
 running NexentaStor.
 
 On the HBA side, I used the LSI 9211-8i 6G controllers for the server's
 internal disks (boot, a handful of large disks, Pliant SSDs for L2Arc).
 There is also a DDRDrive for ZIL. To connect to the D2700 enclosure, I
 used 2 x LSI 9205 6G HBAs; one 4-lane SAS cable per storage controller on
 the D2700.
 
 These were setup with MPxIO (dual controllers, dual paths, dual-ported
 disks) and required a slight bit of tuning of /kernel/drv/scsi_vhci.conf,
 but the performance is great now. The enclosure is supported and I've been
 able to setup drive slot maps and control disk LED's, etc.
 

Coincidentally, I'm also thinking about getting a few D2600 enclosures,
but I've been considering attaching them via a pair of HP SC08Ge 6G SAS
HBAs. Has anybody had any experience with these HBAs? According to a few
searches on the Internet, it should be a rebranded LSI9200-8e.

Cheers,
--
Saso


Re: [zfs-discuss] HP JBOD D2700 - ok?

2011-11-30 Thread Edmund White
I'd recommend the LSI 9205 over the 9200 simply for the newer chipset and
performance reasons, but the HP card you mention is compatible.

-- 
Edmund White





On 11/30/11 8:06 AM, Sašo Kiselkov skiselkov...@gmail.com wrote:

On 11/30/2011 02:40 PM, Edmund White wrote:
 Absolutely. 
 
 I'm using a fully-populated D2700 with an HP ProLiant DL380 G7 server
 running NexentaStor.
 
 On the HBA side, I used the LSI 9211-8i 6G controllers for the server's
 internal disks (boot, a handful of large disks, Pliant SSDs for L2Arc).
 There is also a DDRDrive for ZIL. To connect to the D2700 enclosure, I
 used 2 x LSI 9205 6G HBAs; one 4-lane SAS cable per storage controller
on
 the D2700.
 
 These were setup with MPxIO (dual controllers, dual paths, dual-ported
 disks) and required a slight bit of tuning of
/kernel/drv/scsi_vhci.conf,
 but the performance is great now. The enclosure is supported and I've
been
 able to setup drive slot maps and control disk LED's, etc.
 

Coincidentally, I'm also thinking about getting a few D2600 enclosures,
but I've been considering attaching them via a pair of HP SC08Ge 6G SAS
HBAs. Has anybody had any experience with these HBAs? According to a few
searches on the Internet, it should be a rebranded LSI9200-8e.

Cheers,
--
Saso


Re: [zfs-discuss] HP JBOD D2700 - ok?

2011-11-30 Thread Rocky Shek
Edmund,

Yes, we also recommend the 9205; we migrated from the 9200 to the 9205 a while
ago, and it has been working well for our customers.

Rocky
 
-Original Message-
From: zfs-discuss-boun...@opensolaris.org
[mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Edmund White
Sent: Wednesday, November 30, 2011 6:13 AM
To: Sašo Kiselkov
Cc: zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] HP JBOD D2700 - ok?

I'd recommend the LSI 9205 over the 9200 simply for the newer chipset and
performance reasons, but the HP card you mention is compatible.

-- 
Edmund White





On 11/30/11 8:06 AM, Sašo Kiselkov skiselkov...@gmail.com wrote:

On 11/30/2011 02:40 PM, Edmund White wrote:
 Absolutely. 
 
 I'm using a fully-populated D2700 with an HP ProLiant DL380 G7 server
 running NexentaStor.
 
 On the HBA side, I used the LSI 9211-8i 6G controllers for the server's
 internal disks (boot, a handful of large disks, Pliant SSDs for L2Arc).
 There is also a DDRDrive for ZIL. To connect to the D2700 enclosure, I
 used 2 x LSI 9205 6G HBAs; one 4-lane SAS cable per storage controller
on
 the D2700.
 
 These were setup with MPxIO (dual controllers, dual paths, dual-ported
 disks) and required a slight bit of tuning of
/kernel/drv/scsi_vhci.conf,
 but the performance is great now. The enclosure is supported and I've
been
 able to setup drive slot maps and control disk LED's, etc.
 

Coincidentally, I'm also thinking about getting a few D2600 enclosures,
but I've been considering attaching them via a pair of HP SC08Ge 6G SAS
HBAs. Has anybody had any experience with these HBAs? According to a few
searches on the Internet, it should be a rebranded LSI9200-8e.

Cheers,
--
Saso


Re: [zfs-discuss] HP JBOD D2700 - ok?

2011-11-30 Thread Ragnar Sundblad

On 30 nov 2011, at 14:40, Edmund White wrote:

 Absolutely. 
 
 I'm using a fully-populated D2700 with an HP ProLiant DL380 G7 server
 running NexentaStor.
 
 On the HBA side, I used the LSI 9211-8i 6G controllers for the server's
 internal disks (boot, a handful of large disks, Pliant SSDs for L2Arc).
 There is also a DDRDrive for ZIL. To connect to the D2700 enclosure, I
 used 2 x LSI 9205 6G HBAs; one 4-lane SAS cable per storage controller on
 the D2700.
 
 These were setup with MPxIO (dual controllers, dual paths, dual-ported
 disks) and required a slight bit of tuning of /kernel/drv/scsi_vhci.conf,
 but the performance is great now. The enclosure is supported and I've been
 able to setup drive slot maps and control disk LED's, etc.

Thanks a lot to all of you who have responded, it is really a big help!

Edmund, would you mind sharing your tweaks in /kernel/drv/scsi_vhci.conf?
Being able to control the LEDs could be really useful - what do you use
for doing that (I guess luxadm is not the choice any more :-)?)?

It sounds like LSI 9205 is a good choice then. Is there any special
firmware version(s) to hunt for and/or beware of?

Again, thanks all of you for your help!

/ragge



Re: [zfs-discuss] HP JBOD D2700 - ok?

2011-11-30 Thread Gary
I'd be wary of purchasing HP HBAs without getting a firsthand report
from someone that they're compatible. I've seen several HP controllers
that use LSI chip sets but are crippled in that they won't present
drives as JBOD. That said, I've used a few of the HBAs sourced from
LSI resellers and they work wonderfully with ZFS.

-Gary


Re: [zfs-discuss] HP JBOD D2700 - ok?

2011-11-30 Thread Richard Elling
On Nov 30, 2011, at 6:06 AM, Sašo Kiselkov wrote:
 On 11/30/2011 02:40 PM, Edmund White wrote:
 Absolutely. 
 
 I'm using a fully-populated D2700 with an HP ProLiant DL380 G7 server
 running NexentaStor.
 
 On the HBA side, I used the LSI 9211-8i 6G controllers for the server's
 internal disks (boot, a handful of large disks, Pliant SSDs for L2Arc).
 There is also a DDRDrive for ZIL. To connect to the D2700 enclosure, I
 used 2 x LSI 9205 6G HBAs; one 4-lane SAS cable per storage controller on
 the D2700.
 
 These were setup with MPxIO (dual controllers, dual paths, dual-ported
 disks) and required a slight bit of tuning of /kernel/drv/scsi_vhci.conf,
 but the performance is great now. The enclosure is supported and I've been
 able to setup drive slot maps and control disk LED's, etc.
 
 
 Coincidentally, I'm also thinking about getting a few D2600 enclosures,
 but I've been considering attaching them via a pair of HP SC08Ge 6G SAS
 HBAs. Has anybody had any experience with these HBAs? According to a few
 searches on the Internet, it should be a rebranded LSI9200-8e.

I have tested this configuration with D2600 and D2700 enclosures and the
SC08Ge HBA under ZFS and NexentaStor.  Works fine.
 -- richard

-- 

ZFS and performance consulting
http://www.RichardElling.com
LISA '11, Boston, MA, December 4-9