Erik Ableson wrote:
The problem I had was with the single-drive RAID 0 volumes (I mistakenly wrote RAID 1 in my original message).

This is not a straight-to-disk connection, and you'll have problems if you ever need to move disks around or move them to another controller.

Would you mind explaining exactly what issues or problems you had? I
have moved disks around between several controllers without problems.
You must remember, however, to create the RAID 0 LUN through LSI's
MegaRAID CLI tool and/or to clear any foreign config before the
controller will expose the disk(s) to the OS.
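For reference, the sequence looks roughly like this with LSI's
MegaCli (the enclosure:slot address and adapter number are examples,
adjust for your hardware):

  # Find the enclosure:slot address of the new / moved disk
  MegaCli -PDList -aALL

  # Clear any foreign config the disk carries from another controller
  MegaCli -CfgForeign -Scan -aALL
  MegaCli -CfgForeign -Clear -aALL

  # Create a single-drive RAID 0 LUN so the OS can see the disk
  MegaCli -CfgLdAdd -r0 [252:4] -a0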

The only real problem that I can think of is that you cannot use the
autoreplace functionality of recent ZFS versions with these
controllers, since a replacement disk stays invisible to ZFS until a
LUN has been created on it.
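With a true JBOD controller you would simply enable automatic
replacement on the pool and let ZFS handle hot-swapped disks; with
these cards you create the LUN in MegaCli first and then replace the
disk by hand. A sketch (pool and device names are examples):

  # JBOD controller: one-time setting, replacement is automatic
  zpool set autoreplace=on tank

  # PERC: after creating the LUN, kick off the resilver manually
  zpool replace tank c2t4d0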

I agree that the MD1000 with ZFS is a rocking, inexpensive setup (we have several!) but I'd recommend using a SAS card with a true JBOD mode for maximum flexibility and portability. If I remember correctly, we're using the Adaptec 3085. I've pulled 465 MB/s write and 1 GB/s read off an MD1000 filled with SATA drives.
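For anyone who wants to reproduce that kind of number, a crude
sequential test looks something like this (pool path and sizes are
examples; zeros are fine as long as compression is off):

  # ~16 GB sequential write
  dd if=/dev/zero of=/tank/ddtest bs=1024k count=16384

  # export/import the pool (or reboot) first to defeat the ARC
  dd if=/tank/ddtest of=/dev/null bs=1024k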

Regards,

Erik Ableson

+33.6.80.83.58.28
Sent from my iPhone

On 23 June 2009, at 21:18, Henrik Johansen <hen...@scannet.dk> wrote:

Kyle McDonald wrote:
Erik Ableson wrote:

Just a side note on the PERC-labelled cards: they don't have a JBOD mode, so you _have_ to use hardware RAID. This may or may not be an issue in your configuration, but it does mean that moving disks between controllers is no longer possible. The only way to get a pseudo-JBOD is to create broken RAID 1 volumes, which is not ideal.

It won't even let you make single-drive RAID 0 LUNs? That's a shame.

We currently have 90+ disks configured as single-drive RAID 0 LUNs
on several PERC 6/E (LSI 1078E chipset) controllers, all used by ZFS.

I can assure you that they work without any problems and perform very
well indeed.

In fact, the combination of PERC 6/E and MD1000 disk arrays has worked
so well for us that we are going to double the number of disks this
fall.
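Each single-drive LUN simply shows up as an ordinary disk, so the
pools are laid out exactly as they would be on a real JBOD. A sketch
with hypothetical device names:

  zpool create tank \
    raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 \
    raidz2 c2t6d0 c2t7d0 c2t8d0 c2t9d0 c2t10d0 c2t11d0 \
    spare c2t12d0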

The lack of portability is disappointing. The trade-off, though, is battery-backed cache if the card supports it.
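For what it's worth, write-back caching is only safe while the
battery is healthy; with MegaCli the relevant knobs look roughly like
this (adapter 0 assumed):

  # Check battery status
  MegaCli -AdpBbuCmd -GetBbuStatus -a0

  # Enable write-back, dropping to write-through if the BBU fails
  MegaCli -LDSetProp WB -LAll -a0
  MegaCli -LDSetProp NoCachedBadBBU -LAll -a0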

-Kyle


Regards,

Erik Ableson

+33.6.80.83.58.28
Sent from my iPhone

On 23 June 2009, at 04:33, "Eric D. Mudama" <edmud...@bounceswoosh.org> wrote:

> On Mon, Jun 22 at 15:46, Miles Nordin wrote:
>>>>>>> "edm" == Eric D Mudama <edmud...@bounceswoosh.org> writes:
>>
>>  edm> We bought a Dell T610 as a fileserver, and it comes with an
>>  edm> LSI 1068E based board (PERC6/i SAS).
>>
>> which driver attaches to it?
>>
>> pciids.sourceforge.net says this is a 1078 board, not a 1068 board.
>>
>> please, be careful. There's too much confusion about these cards.
>
> Sorry, that may have been confusing.  We have the cheapest storage
> option on the T610, with no onboard cache. I guess it's called the
> "Dell SAS6i/R" while they reserve the PERC name for the ones with
> cache. I had understood that they were basically identical except for
> the cache, but maybe not.
>
> Anyway, this adapter has worked great for us so far.
>
>
> snippet of prtconf -D:
>
>
> i86pc (driver name: rootnex)
>    pci, instance #0 (driver name: npe)
>        pci8086,3411, instance #6 (driver name: pcie_pci)
>            pci1028,1f10, instance #0 (driver name: mpt)
>                sd, instance #1 (driver name: sd)
>                sd, instance #6 (driver name: sd)
>                sd, instance #7 (driver name: sd)
>                sd, instance #2 (driver name: sd)
>                sd, instance #4 (driver name: sd)
>                sd, instance #5 (driver name: sd)
>
>
> For this board the mpt driver is being used, and here's the prtconf
> -pv info:
>
>
>  Node 0x00001f
>    assigned-addresses:  81020010.00000000.0000fc00.00000000.00000100.83020014.00000000.df2ec000.00000000.00004000.8302001c.00000000.df2f0000.00000000.00010000
>    reg:  00020000.00000000.00000000.00000000.00000000.01020010.00000000.00000000.00000000.00000100.03020014.00000000.00000000.00000000.00004000.0302001c.00000000.00000000.00000000.00010000
>    compatible:  'pciex1000,58.1028.1f10.8' + 'pciex1000,58.1028.1f10' + 'pciex1000,58.8' + 'pciex1000,58' + 'pciexclass,010000' + 'pciexclass,0100' + 'pci1000,58.1028.1f10.8' + 'pci1000,58.1028.1f10' + 'pci1028,1f10' + 'pci1000,58.8' + 'pci1000,58' + 'pciclass,010000' + 'pciclass,0100'
>    model:  'SCSI bus controller'
>    power-consumption:  00000001.00000001
>    devsel-speed:  00000000
>    interrupts:  00000001
>    subsystem-vendor-id:  00001028
>    subsystem-id:  00001f10
>    unit-address:  '0'
>    class-code:  00010000
>    revision-id:  00000008
>    vendor-id:  00001000
>    device-id:  00000058
>    pcie-capid-pointer:  00000068
>    pcie-capid-reg:  00000001
>    name:  'pci1028,1f10'
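>
> If you just want to confirm which driver claimed the card without
> reading the whole device tree, a quick grep over the same output
> works:
>
>    prtconf -D | grep mpt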
>
>
> --eric
>
>
> --
> Eric D. Mudama
> edmud...@mail.bounceswoosh.org
>




--
Med venlig hilsen / Best Regards

Henrik Johansen
hen...@scannet.dk
Tlf. 75 53 35 00

ScanNet Group
A/S ScanNet
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
