If anyone is interested, as Michelle Sullivan just mentioned: one problem I
found when looking for an HBA is that they are not so easy to find. Scouring
the internet for a backup HBA, I came across these -
http://www.avagotech.com/products/server-storage/host-bus-adapters/#tab-12Gb1
Can only speak f
> On 01 Aug 2016, at 19:30, Michelle Sullivan wrote:
>
> There are reasons for using either…
Indeed, but my decision was to run ZFS. And getting an HBA in some
configurations can be difficult because vendors insist on using
RAID adapters. After all, that’s what most of their customers demand.
> On 01 Aug 2016, at 15:12, O. Hartmann wrote:
>
> First, thanks for responding so quickly.
>
>> - The third option is to make the driver expose the SAS devices like an HBA
>> would do, so that they are visible to the CAM layer, and disks are handled by
>> the stock “da” driver, which is the ide
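The snippet does not name the exact option. On FreeBSD 10 and later, a MegaRAID
card claimed by mrsas(4) rather than mfi(4) exposes its disks through CAM as
plain "da" devices; assuming that is what is meant here, the result can be
checked from a shell once the card is running in that mode:

  camcontrol devlist      # member disks should now show up as daN devices
  camcontrol inquiry da0  # talk to a member disk directly through CAM

ZFS can then be given the daN devices whole, without the RAID firmware sitting
in the data path.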
On Mon, 1 Aug 2016 11:48:30 +0200
Borja Marcos wrote:
Hello.
First, thanks for responding so quickly.
> > On 01 Aug 2016, at 08:45, O. Hartmann wrote:
> >
> > On Wed, 22 Jun 2016 08:58:08 +0200
> > Borja Marcos wrote:
> >
> >> There is an option you can use (I do it all the time!) to make
> On 01 Aug 2016, at 08:45, O. Hartmann wrote:
>
> On Wed, 22 Jun 2016 08:58:08 +0200
> Borja Marcos wrote:
>
>> There is an option you can use (I do it all the time!) to make the card
>> behave as a plain HBA so that the disks are handled by the “da” driver.
>>
>> Add this to /boot/loader.c
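The line above is cut off; the file is presumably /boot/loader.conf. Assuming
the option being referred to is the mrsas(4) takeover tunable, which is one way
to make a dual-supported MegaRAID card present its disks as "da" devices, a
sketch would be:

  # /boot/loader.conf
  # Let mrsas(4) claim cards also supported by mfi(4), so the member
  # disks attach through CAM as daN instead of mfidN.
  hw.mfi.mrsas_enable="1"

After a reboot the disks should appear as /dev/da0, /dev/da1, and so on.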
On Wed, 22 Jun 2016 08:58:08 +0200
Borja Marcos wrote:
> > On 22 Jun 2016, at 04:08, Jason Zhang wrote:
> >
> > Mark,
> >
> > Thanks
> >
> > We have the same RAID settings on both FreeBSD and CentOS, including the cache
> > setting. In FreeBSD, I enabled the write cache but the performance is the
> >
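For reference, on mfi(4) volumes the controller cache policy can be inspected
and changed with mfiutil(8); the volume name mfid0 below is just an assumed
example:

  mfiutil cache mfid0             # show the current cache policy
  mfiutil cache mfid0 write-back  # use write-back caching (wants a healthy BBU)
  mfiutil cache mfid0 enable      # enable controller caching for the volume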
Regarding Jason Zhang's message of 17.06.2016 09:16 (localtime):
> Hi,
>
> I am working on a storage service based on FreeBSD. I was expecting a good
> result because many professional storage companies use FreeBSD as their OS, but
> I am disappointed by the poor performance. I tested the
> On 22 Jun 2016, at 04:08, Jason Zhang wrote:
>
> Mark,
>
> Thanks
>
> We have the same RAID settings on both FreeBSD and CentOS, including the cache setting.
> In FreeBSD, I enabled the write cache but the performance is the same.
>
> We don’t use ZFS or UFS; we test the performance on the RAW G
As a side note, we also use this controller with FreeBSD 10.1, but configured
each drive as a JBOD and then created raidz ZFS pools; that was much faster
than letting the LSI do RAID5.
Best
Doros
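A minimal sketch of that layout, assuming eight member disks exported as JBOD
and showing up as da0 through da7 (under mfi(4) JBOD mode they may instead
appear as mfisyspdN; adjust the names accordingly):

  zpool create tank raidz da0 da1 da2 da3 da4 da5 da6 da7
  zpool status tank

The redundancy then lives in ZFS rather than in the controller's RAID5.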
Mark,
Thanks
We have the same RAID settings on both FreeBSD and CentOS, including the cache setting.
In FreeBSD, I enabled the write cache but the performance is the same.
We don’t use ZFS or UFS; we test the performance on the raw GEOM disk “mfidx”
exported by the mfi driver. We observed the “gstat” r
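The kind of raw-device test described would look roughly like this, with the
device name and sizes assumed for illustration (writing to the raw device
destroys any data on it):

  # sequential write straight to the raw GEOM provider
  dd if=/dev/zero of=/dev/mfid0 bs=1m count=4096
  # sequential read back
  dd if=/dev/mfid0 of=/dev/null bs=1m count=4096
  # in another terminal: per-device throughput, IOPS and %busy
  gstat -f mfid0

gstat's queue length, ms/w latency and %busy columns show where the time is
going on the device in question.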
On Fri, Jun 17, 2016, at 02:17, Jason Zhang wrote:
> Hi,
>
> I am working on a storage service based on FreeBSD. I was expecting a
> good result because many professional storage companies use FreeBSD as their
> OS, but I am disappointed by the poor performance. I tested the
> performance of
On 17/06/2016 3:16 PM, Jason Zhang wrote:
Hi,
I am working on a storage service based on FreeBSD. I was expecting a good
result because many professional storage companies use FreeBSD as their OS, but I
am disappointed by the poor performance. I tested the performance of the LSI
MegaRAID 9260
Hi,
I am working on a storage service based on FreeBSD. I was expecting a good
result because many professional storage companies use FreeBSD as their OS, but I
am disappointed by the poor performance. I tested the performance of the LSI
MegaRAID 9260-8i and had the following bad result:
1.