> On 01 Aug 2016, at 19:30, Michelle Sullivan <miche...@sorbs.net> wrote:
> 
> There are reasons for using either…

Indeed, but my decision was to run ZFS. And getting an HBA in some 
configurations can be difficult because vendors insist on using 
RAID adapters. After all, that’s what most of their customers demand.

Fortunately, at least some Avago/LSI cards work pretty well as HBAs. An 
example is the now venerable LSI SAS2008.
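On FreeBSD such a card, running IT firmware, attaches to mps(4) and the disks 
show up as plain da(4) devices. A quick sanity check (the device name is just 
an example, and smartctl comes from the smartmontools port):

  dmesg | grep -i mps      # the SAS2008 should attach as mps0, not mfi0
  camcontrol devlist       # disks appear directly as da(4) devices
  smartctl -a /dev/da0     # SMART works with no RAID pass-through tricks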

> Nowadays its seems the conversations have degenerated into those like Windows 
> vs Linux vs Mac where everyone thinks their answer is the right one (just as 
> you suggested you (Borja Marcos) did with the Dell salesman), where in 
> reality each has its own advantages and disadvantages.

I know, and that’s not the case here. But it’s quite frustrating to order a 
server with an HBA rather than a RAID controller and receive an answer such as
“the HBA option is not available”. That’s why people are zapping, flashing and, 
generally, torturing cards into HBAs rather cruelly ;)
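(For what it’s worth, the usual “torture” for a 9211-class card is a crossflash 
to IT firmware from an EFI shell or FreeDOS, roughly like this; the firmware 
image name depends on the exact card, and erasing the flash can brick the 
controller if anything goes wrong:)

  sas2flash -listall                         # note the adapter number and current firmware
  sas2flash -o -e 6                          # erase the existing flash
  sas2flash -o -f 2118it.bin -b mptsas2.rom  # write the IT firmware plus the optional boot ROM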

So, in my case, it’s not about what’s better or worse. It’s a simpler issue: 
the customer (myself) has made a decision, which may be right or wrong, and the 
manufacturer fails to deliver what I need. If it were only one manufacturer, 
well, off with them, but the issue is widespread across the industry. 

> Eg: I'm running 2 zfs servers on 'LSI 9260-16i's... big mistake! (the ZFS, 
> not LSI's)... one is a 'movie server' the other a 'postgresql database' 
> server...  The latter most would agree is a bad use of zfs, the die-hards 
> won't but then they don't understand database servers and how they work on 
> disk.  The former has mixed views, some argue that zfs is the only way to 
> ensure the movies will always work, personally I think of all the years 
> before zfs when my data on disk worked without failure until the disks 
> themselves failed... and RAID stopped that happening...  what suddenly 
> changed, are disks and ram suddenly not reliable at transferring data? .. 
> anyhow back to the issue there is another part with this particular hardware 
> that people just throw away…

Well, silent corruption can happen. I’ve seen it once, caused by a flaky HBA, 
and ZFS saved the day. Yes, there were reliable replicas. Still, rebuilding 
would have been a pain in the ass. 
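(For anyone who wants to see what ZFS catches, a scrub is the tool; “tank” is 
just an example pool name:)

  zpool scrub tank      # re-read every block and verify it against its checksum
  zpool status -v tank  # CKSUM error counters, plus a list of damaged files if any are beyond repair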

> The LSI 9260-* controllers have been designed to provide on hardware RAID.  
> The caching whether using the Cachecade SSD or just oneboard ECC memory is 
> *ONLY* used when running some sort of RAID set and LVs... this is why LSI 
> recommend 'MegaCli -CfgEachDskRaid0' because it does enable caching..  A good 
> read on how to setup something similar is here: 
> https://calomel.org/megacli_lsi_commands.html (disclaimer, I haven't parsed 
> it all so the author could be clueless, but it seems to give generally good 
> advice.)  Going the way of 'JBOD' is a bad thing to do, just don't, 
> performance sucks. As for the recommended command above, can't comment 
> because currently I don't use it nor will I need to in the near future... but…
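(For reference, the quoted command turns each physical disk into its own 
single-drive RAID0 virtual drive so the controller will cache it; something 
along these lines, although the exact option spelling varies between MegaCli 
builds, so take it as a sketch:)

  MegaCli -CfgEachDskRaid0 WB RA Direct -aAll  # one RAID0 VD per disk, write-back + read-ahead
  MegaCli -LDInfo -Lall -aAll                  # list the resulting virtual drives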

Actually, it’s not a good idea to use heavy controller caching when running 
ZFS. Its reliability depends on being able to commit metadata to disk when it 
asks to, and a cache that lies about flushes breaks that assumption. So I don’t 
care about that caching option. Provided you have enough RAM, ZFS is very 
effective at caching data itself.
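If you’re stuck with such a controller anyway, the least bad thing for ZFS is 
to push the controller cache out of the I/O path; something like this, with the 
same caveat that MegaCli option spelling varies between versions:

  MegaCli -LDSetProp WT -LAll -aAll            # write-through: don’t hold writes in controller RAM
  MegaCli -LDSetProp NORA -LAll -aAll          # no controller read-ahead; the ARC already does that job
  MegaCli -LDSetProp Direct -LAll -aAll        # don’t route reads through the controller cache
  MegaCli -LDSetProp -DisDskCache -LAll -aAll  # optionally disable the drives’ own write caches too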

> If you (O Hartmann) want to use or need to use ZFS with any OS including 
> FreeBSD don't go with the LSI 92xx series controllers, its just the wrong 
> thing to do..  Pick an HBA that is designed to give you direct access to the 
> drives not one you have to kludge and cajole.. Including LSI controllers with 
> caches that use the mfi driver, just not those that are not designed to work 
> in a non RAID mode (with or without the passthru command/mode above.)

As I said, the problem is that sometimes it’s not so easy to find the right HBA. 
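(When a vendor’s spec sheet is vague, FreeBSD itself will tell you what you 
actually got; the grep pattern is just a rough filter:)

  pciconf -lv | grep -B 4 -i lsi     # the device prefix (mfi0, mrsas0, mps0, mpr0) shows which driver claimed the card
  dmesg | egrep 'mfi|mrsas|mps|mpr'  # mps(4)/mpr(4) mean plain HBA access, mfi(4)/mrsas(4) mean the RAID stack is in the way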

> So moral of the story/choices.  Don't go with ZFS because people tell you its 
> best, because it isn't, go with ZFS if it suits your hardware and 
> application, and if ZFS suits your application, get hardware for it.

Indeed, I second this. But really, "hardware for it" covers a rather broad 
category ;) ZFS can even manage to work on hardware that works _against_ it.

Borja.


