I'm actually about to play around (at home) with a Hyper-V cluster using 
Storage Spaces Direct (S2D). I've built two nodes with Xeon E-2146Gs, 32GB ECC 
RAM, 2x 800GB enterprise SSDs for OS/boot, then 2x 960GB NVMe disks with 
8x 600GB 15k SAS disks on a 9311-8i. The 9311 is a 12Gbps SAS HBA, as I was 
originally going to use some 12Gb SAS SSDs before changing the design to NVMe. 
Two 40Gbps Ethernet connections (via QSFP+ DAC) between the nodes handle the 
S2D replication traffic.

Still in progress, but I'm interested in how it'll turn out.
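
Once both nodes are up, the bring-up should be just a handful of PowerShell 
cmdlets. A rough sketch of what I'm planning (the cluster/node/volume names 
here are placeholders, not my actual config):

```powershell
# Validate the nodes for S2D, then build a cluster with no shared storage.
Test-Cluster -Node "node1","node2" `
    -Include "Storage Spaces Direct","Inventory","Network","System Configuration"
New-Cluster -Name "s2d-lab" -Node "node1","node2" -NoStorage

# Enable S2D: it claims the eligible local disks into a pool and, with mixed
# media, should use the NVMe devices as cache in front of the 15k SAS spinners.
Enable-ClusterStorageSpacesDirect

# Carve out a mirrored CSV volume for the Hyper-V VMs.
New-Volume -FriendlyName "VMs" -FileSystem CSVFS_ReFS `
    -StoragePoolFriendlyName "S2D*" -Size 1TB
```

That's the happy path, anyway--a two-node cluster also needs a witness (file 
share or cloud), which I've left out above.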

-----Original Message-----
From: Hardware [mailto:[email protected]] On Behalf Of 
Christopher Fisk
Sent: Friday, November 15, 2019 4:38 PM
To: [email protected]
Subject: Re: [H] PCIe controller

This is why I got a motherboard with 8 onboard SATA ports & 1 NVMe slot. I run 
everything in Storage Spaces, so it's hardware agnostic.

On Fri, Nov 15, 2019 at 2:18 PM Greg Sevart <[email protected]> wrote:

> Well, most of the time these cards are connected to SAS backplanes 
> with a
> SFF-8087 to SFF-8087 cable. The consumer use case of directly 
> connecting to disks (SATA at that) is less common. Not uncommon, but 
> less. :)
>
> Agreed on the potential for having a lot of extra cables, but for me, 
> it's worth it to bypass the crap cheap consumer SATA HBAs that never 
> work completely right. I don't run a lot of JBOD anymore, but the 
> systems that do are using 9211s or their successors--usually alongside 
> SAS expanders.
>
> One final point - this card can run in RAID mode or IT 
> (Initiator/Target) mode. If it comes in RAID mode, you may need to 
> switch it to IT mode. There are a number of guides out there, but this one 
> looks quite decent:
> https://nguvu.org/freenas/Convert-LSI-HBA-card-to-IT-mode/
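>
> For the archive, the procedure in that guide boils down to a few sas2flash 
> commands, run from an EFI/DOS boot stick. Roughly (exact firmware file 
> names depend on the card--2118it.bin is the SAS2008/9211-8i IT image):
>
>     sas2flash -listall                          (note adapter index and SAS address)
>     sas2flash -o -e 6                           (erase the existing IR/RAID flash)
>     sas2flash -o -f 2118it.bin -b mptsas2.rom   (flash IT firmware + boot ROM)
>     sas2flash -listall                          (confirm the card reports IT firmware)
>
> Erasing can wipe the SAS address, so record it first and restore it with 
> sas2flash -o -sasadd if needed.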
>
>
> -----Original Message-----
> From: Hardware [mailto:[email protected]] On 
> Behalf Of Winterlight
> Sent: Friday, November 15, 2019 12:55 PM
> To: [email protected]
> Subject: Re: [H] PCIe controller
>
>
> Yeah, I figured that out after I ordered it. I am surprised they don't 
> include that with the product. What I don't like is you end up with a 
> lot of unused cables in your case. I will have to think about it.
>
>
> At 11:44 AM 11/15/2019, you wrote:
> >The card itself does not have any "SATA" ports. It uses a multi-lane 
> >mini-SAS connector - SFF-8087. You will need one or more SFF-8087 to 
> >SATA breakout cables (4 ports each) to attach disks, which is the 2nd
> link.
> >
> >-----Original Message-----
> >From: Hardware [mailto:[email protected]] On 
> >Behalf Of Winterlight
> >Sent: Friday, November 15, 2019 12:18 PM
> >To: [email protected]
> >Subject: Re: [H] PCIe controller
> >
> >
> >I ordered the controller, but I just realized that the other link is a 
> >breakout cable and not a SATA-to-eSATA cable. Is this required, or is 
> >it just to increase from 4 to 8 ports?
> >
> >At 06:31 AM 11/15/2019, you wrote:
> > >LSI (then Avago, and now Broadcom) makes good stuff, but it's 
> > >generally server grade.
> > >
> > >If you're just connecting SATA disks, I would actually suggest an 
> > >LSI/Avago/Broadcom SAS card. Something like the 9211-8i in 
> > >Initiator/Target
> > >(IT) mode would let you directly attach 8 more disks, either SATA 
> > >or SAS. No eSATA though, so if that's a requirement, you'd need to 
> > >use your onboard ports for that. You would need SFF-8087 to SATA 
> > >forward breakout cables as well, but those are cheap.
> > >
> > >https://amzn.com/B002RL8I7M
> > >https://amzn.com/B012BPLYJC
> > >
> > >Do beware that, depending on your system, it could add time to your 
> > >POST, and add-in I/O controllers across the board are notorious for 
> > >having power-saving issues--specifically, waking from sleep/hibernate.
> > >
> > >-----Original Message-----
> > >From: Hardware [mailto:[email protected]] On 
> > >Behalf Of Winterlight
> > >Sent: Thursday, November 14, 2019 9:53 PM
> > >To: Hardware Group <[email protected]>
> > >Subject: [H] PCIe controller
> > >
> > >I am looking to buy a quality PCIe SATA controller. I am not going 
> > >to set up RAID, so just 6 Gbps is fine. What I am looking for is 
> > >solid reliability and Windows 10 compatibility, with at least 4 ports. 
> > >I have tried the cheap ones like SYBA and others, and they never work 
> > >that well or very reliably. The one I have now hangs at POST if 
> > >anything is plugged into the eSATA port. I started looking for a 
> > >Promise card but can't find anything that isn't designed for a server. 
> > >I see a brand called LSI but I am not familiar with it. Any suggestions?

