you suggested you (Borja Marcos) did with the Dell salesman), where in
> reality each has its own advantages and disadvantages.
I know, but this is not the case here. Still, it’s quite frustrating to try to
order a server with an HBA rather than a RAID controller and to receive an
answer such as “the HBA op
> On 01 Aug 2016, at 15:12, O. Hartmann wrote:
>
> First, thanks for responding so quickly.
>
>> - The third option is to make the driver expose the SAS devices like an HBA
>> would, so that they are visible to the CAM layer, and the disks are handled by
>> the stock “da” driver, which is the ide
> On 01 Aug 2016, at 08:45, O. Hartmann wrote:
>
> On Wed, 22 Jun 2016 08:58:08 +0200
> Borja Marcos wrote:
>
>> There is an option you can use (I do it all the time!) to make the card
>> behave as a plain HBA so that the disks are handled by the “da” driver
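[Editor's note: the card model isn't named in this snippet; the sketch below assumes an mfi(4)-based LSI/MegaRAID controller, for which this pass-through mode is enabled with two loader tunables.]

```shell
# /boot/loader.conf -- sketch, assuming an mfi(4)-based RAID card.
# Load the mfip CAM pass-through shim so the physical disks are
# exposed to the CAM layer.
mfip_load="YES"
# Allow mfi(4) to hand the raw disks through to CAM instead of
# hiding them behind its RAID volume abstraction.
hw.mfi.allow_cam_disk_passthrough="1"
```

After a reboot the member disks should then attach via the stock da(4) driver and show up as /dev/da* devices.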
> On 22 Jun 2016, at 04:08, Jason Zhang wrote:
>
> Mark,
>
> Thanks
>
> We have the same RAID settings on both FreeBSD and CentOS, including the cache
> settings. In FreeBSD, I enabled the write cache, but the performance is the same.
>
> We don’t use ZFS or UFS; we test the performance on the RAW G
> On 24/9/2014, at 17:09, David Wolfskill wrote:
>
>
>> On Tue, Sep 23, 2014 at 10:45:14AM +0200, Borja Marcos wrote:
>> ...
>> Anyway, for disk stats GEOM offers a nice API. You can get delays per GEOM
>> provider, bandwidths, etc.
On Sep 23, 2014, at 11:19 AM, Stefan Parvu wrote:
>
>> Anyway, for disk stats GEOM offers a nice API. You can get delays per GEOM
>> provider, bandwidths, etc.
>
> Are you talking about C consumers, or can I do that using Perl or sh?
>
> Is there any way to consume the metrics via Perl, for exa
On Sep 23, 2014, at 10:38 AM, Stefan Parvu wrote:
>
>> ... I rather wish I could get the same information via sysctl. (Well,
>> something seems to be available via the "opaque" kern.devstat.all
>> sysctl(8) variable, but sysctl(8) doesn't display all of it, and parsing
>> it seems as if that wo
On Sep 22, 2014, at 11:22 PM, David Wolfskill wrote:
> ... I rather wish I could get the same information via sysctl. (Well,
> something seems to be available via the "opaque" kern.devstat.all
> sysctl(8) variable, but sysctl(8) doesn't display all of it, and parsing
> it seems as if that would
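[Editor's note: rather than decoding the opaque kern.devstat.all blob by hand, the same per-device statistics can be pulled from a script through the standard FreeBSD tools, which already link against the devstat machinery. A hedged sketch for sh consumers (FreeBSD-only commands):]

```shell
#!/bin/sh
# Sketch for FreeBSD: read per-provider I/O statistics without
# parsing the binary kern.devstat.all sysctl directly.
show_disk_stats() {
    # gstat(8) in batch mode prints one parseable snapshot for every
    # GEOM provider (queue length, reads/writes, busy %); -b exits
    # after one snapshot, -I sets the sampling interval.
    gstat -bI 1s

    # iostat(8) gives the classic devstat-derived view: -d devices
    # only, -x extended statistics, one report over a 1-second window.
    iostat -dx 1 1
}
```

The columns are then easy to split with awk or Perl; for in-process access from C, the devstat(3) library is the supported interface.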
Hello,
Sorry for the crossposting, but I think this is also relevant to -fs.
After many years gathering dust (although I've been using it internally) I have
updated devilator, the performance data collector for Orca.
Apart from some cleanup and some bug fixes, I am including ZFS monitoring. It
On Jan 23, 2008, at 12:44 PM, Kris Kennaway wrote:
One suggestion I have is that, as more metrics are added, it becomes
important to have an "at a glance" overview of changes so we can monitor
for performance improvements and regressions across many workloads.
One
…started its life as an internal development, and I'm polishing it for
public distribution. Most of the development time has been paid for by my
employer, Sarenet, and it's being released as a contribution to the
FreeBSD community.
Please send bugs, ideas, flames, etc. to the followi
I am running some performance tests on named to see how it performs with
different configurations on FreeBSD, and figured I would share the first
results. The first tests are for serving up static data.
I added this to
http://wiki.freebsd.org/BenchmarkMatrix
According to the bind9 port mak