Re: [zfs-discuss] problem adding second MD1000 enclosure to LSI 9200-16e

2011-01-10 Thread Rob Cohen
As a follow-up, I tried a SuperMicro enclosure (SC847E26-RJBOD1).  I have 3 
sets of 15 drives.  I got the same result (a hang during boot) when I loaded 
the second set of drives (going from 15 to 30 drives).

Then I tried changing the LSI 9200's BIOS setting for max INT 13 drives from 
24 (the default) to 15.  From then on, the SuperMicro enclosure worked fine, 
even with all 45 drives, with no kernel hangs.

I suspect that the BIOS setting would have worked with the MD1000 enclosures as 
well, but I never retested the MD1000s after I had the SuperMicro enclosure running.

I'm not sure whether the kernel hang with max INT 13 = 24 was a hardware problem 
or a Solaris bug.
  - Rob

> I have 15x SAS drives in a Dell MD1000 enclosure,
> attached to an LSI 9200-16e.  This has been working
> well.  The system is booting off of internal drives,
> on a Dell SAS 6/iR.
> 
> I just tried to add a second storage enclosure, with
> 15 more SAS drives, and I got a lockup during "Loading
> Kernel".  I got the same result whether I daisy-chained
> the enclosures or plugged them both directly
> into the LSI 9200.  When I removed the second
> enclosure, it booted up fine.
> 
> I also have an LSI MegaRAID 9280-8e I could use, but
> I don't know if there is a way to pass the drives
> through without creating a RAID0 virtual drive for
> each disk, which would complicate replacing disks.
> The 9280 boots up fine, and the system can see the new
> virtual drives.
> 
> Any suggestions?  Is there some sort of boot
> procedure to get the system to recognize
> the second enclosure without locking up?  Is there a
> special way to configure one of these LSI boards?


[zfs-discuss] problem adding second MD1000 enclosure to LSI 9200-16e

2010-11-21 Thread Rob Cohen
I have 15x SAS drives in a Dell MD1000 enclosure, attached to an LSI 9200-16e.  
This has been working well.  The system is booting off of internal drives, on 
a Dell SAS 6/iR.

I just tried to add a second storage enclosure, with 15 more SAS drives, and I 
got a lockup during "Loading Kernel".  I got the same result whether I daisy-chained 
the enclosures or plugged them both directly into the LSI 9200.  When 
I removed the second enclosure, it booted up fine.

I also have an LSI MegaRAID 9280-8e I could use, but I don't know if there is a 
way to pass the drives through without creating a RAID0 virtual drive for each 
disk, which would complicate replacing disks.  The 9280 boots up fine, and the 
system can see the new virtual drives.
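
(For reference, if I do fall back to the 9280, I gather MegaCli can at least 
script the per-drive RAID0 virtual drives instead of creating them one at a 
time; the syntax below is from memory of the MegaCli docs, so double-check it 
before running anything:

  # create one RAID0 virtual drive per unconfigured physical disk
  MegaCli -CfgEachDskRaid0 WT Direct -aALL

  # list the resulting virtual drives
  MegaCli -LDInfo -Lall -aALL

That still leaves the disk-replacement hassle, since a replaced disk needs a new 
virtual drive created before the OS can use it.)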

Any suggestions?  Is there some sort of boot procedure to get the system to 
recognize the second enclosure without locking up?  Is there a special way to 
configure one of these LSI boards?

Thanks,
   Rob


Re: [zfs-discuss] problem adding second MD1000 enclosure to LSI 9200-16e

2010-11-21 Thread Markus Kovero

> Any suggestions?  Is there some sort of boot procedure to get the system to 
> recognize the second enclosure without locking up?  Is there a special way to 
> configure one of these LSI boards?


It should just work. Make sure you have connected it the right way and that 
neither JBOD is in split mode (split mode does not allow daisy chaining).
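
If it still hangs with the cabling and JBOD mode correct, it may be worth checking 
what the HBA itself enumerates, independent of the OS.  Assuming LSI's sas2ircu 
utility is installed for the 9200-16e, something like this should list both 
enclosures and every attached drive:

  # list the controllers sas2ircu can see
  sas2ircu LIST

  # dump the enclosure and drive inventory for controller 0
  sas2ircu 0 DISPLAY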

Yours
Markus Kovero


Re: [zfs-discuss] problem adding second MD1000 enclosure to LSI 9200-16e

2010-11-21 Thread Rob Cohen
Markus,
I'm pretty sure that I have the MD1000 plugged in properly, especially since 
the same connection works on the 9280 and Perc 6/e.  It's not in split mode.

Thanks for the suggestion, though.