Re: Frequent hickups on the networking layer

2015-05-09 Thread Mark Schouten
Hi,

Yes, it did. I see no mbuf errors anymore and no Ethernet errors. Ctld does not 
crash anymore; it has kept running since I lowered the MTU to 1500.

I am using vlans, and the weirdest thing about lowering the MTU was that 
everything went crazy when I lowered it only on the vlan interface: ctld 
would not start completely, pings started taking several hundred milliseconds, 
and it just wouldn't work anymore. Only after I lowered all interfaces to 1500 
was everything OK, and it has been ever since.
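
(For reference, the change itself is just ifconfig on the parent and on each 
vlan child; the interface names here are the ones from the ifconfig output 
quoted further down in this thread, adjust to your own setup:

# ifconfig em0 mtu 1500
# ifconfig nfsv308 mtu 1500

and append "mtu 1500" to the matching ifconfig_* lines in /etc/rc.conf so the 
setting survives a reboot.)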

I do see a lot of jumbo page allocations during backups at night, which might 
be nfsd or ctld, but they are not causing any issues.

I've learned, for now: FreeBSD and jumbo frames are a no-go. 

Hope it helps you too.


Regards,

-- 
Mark Schouten
Tuxis Internet Engineering
m...@tuxis.nl / 0318 200208

> On 10 May 2015, at 02:17, "Christopher Forgeron"  wrote:
> 
> Mark, did switching to an MTU of 1500 ever help?
> 
> I'm currently reliving a problem with this - I'm down to an MTU of 4000, but I 
> still see jumbo pages being allocated - I believe it's my iSCSI setup (using 
> a 4k block size, which means the packet is bigger than 4k), but I'm not sure 
> where it's all coming from yet.
> 
> I'm on 10.1-RELEASE, FYI. 
> 
> I'm going to patch my network drivers to not use MJUM9BYTES and see if that has 
> an effect.
> 
> For me, the problem all started again once I really started putting storage 
> load on the FreeBSD machines. At times, I'm seeing 7 Gbit/s on the 10 Gbit 
> adapters. 
> 
> Oh, and there are gremlins in the new ctld / iscsi as well. I'll get into 
> that later, but if a heavily loaded iscsi target goes down, then when it reboots, 
> the reconnect storm from all the iscsi initiators kernel-panics the FreeBSD 
> iscsi target host. My machine went through three boot-start-panic loops 
> before I caught it and put it into single-user mode. Starting ctld manually 
> seems to make everything okay. 
> 
> 
> 
> 
>> On Wed, May 6, 2015 at 12:03 PM, Mark Schouten  wrote:
>> Hi,
>> 
>> On 04/29/2015 04:06 PM, Garrett Wollman wrote:
>> 
>>> If you're using one of the drivers that has this problem, then yes,
>>> keeping your layer-2 MTU/MRU below 4096 will probably cause it to use
>>> 4k (page-sized) clusters instead, which are perfectly safe.
>>> 
>>> As a side note, at least on the hardware I have to support, Infiniband
>>> is limited to 4k MTU -- so I have one "jumbo" network with 4k frames
>>> (that's bridged to IB) and one with 9k frames (that everything else
>>> uses).
>> 
>> So I was thinking: a customer of mine runs mostly the same setup and has no 
>> issues at all. The only difference is an MTU of 1500 vs. an MTU of 9000.
>> 
>> I also created a graph in munin, graphing the number of mbuf_jumbo requests 
>> and failures. I find that when lots of writes occur to the iscsi layer, the 
>> number of failed requests grows, and so does the number of errors on the 
>> ethernet interface. See the attached images. My customer is also not suffering 
>> from a crashing ctld daemon, which crashes every other minute in my setup.
>> 
>> So tonight I'm going to switch to an MTU of 1500; I'll let you know if that 
>> helps.
>> 
>> 
>> Regards,
>> 
>> Mark Schouten
>> 
>> 
>> ___
>> freebsd-net@freebsd.org mailing list
>> http://lists.freebsd.org/mailman/listinfo/freebsd-net
>> To unsubscribe, send any mail to "freebsd-net-unsubscr...@freebsd.org"
> 
___
freebsd-net@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-net
To unsubscribe, send any mail to "freebsd-net-unsubscr...@freebsd.org"


Re: Frequent hickups on the networking layer

2015-05-09 Thread Christopher Forgeron
Mark, did switching to an MTU of 1500 ever help?

I'm currently reliving a problem with this - I'm down to an MTU of 4000, but
I still see jumbo pages being allocated - I believe it's my iSCSI setup
(using a 4k block size, which means the packet is bigger than 4k), but I'm
not sure where it's all coming from yet.

I'm on 10.1-RELEASE, FYI.

I'm going to patch my network drivers to not use MJUM9BYTES and see if that
has an effect.
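
(For anyone curious what such a patch touches: the sketch below is illustrative
only, not the code of em/ix/cxgbe or any other in-tree driver, but many FreeBSD
NIC drivers pick their receive cluster size from the configured maximum frame
size in roughly this way, and the change amounts to never choosing MJUM9BYTES:

    /* Illustrative sketch -- MCLBYTES, MJUMPAGESIZE and MJUM9BYTES are the
     * real constants from <sys/param.h>; the function itself is made up. */
    #include <sys/param.h>

    static int
    example_rx_cluster_size(int max_frame_size)
    {
            if (max_frame_size <= MCLBYTES)
                    return (MCLBYTES);      /* 2k cluster */
            if (max_frame_size <= MJUMPAGESIZE)
                    return (MJUMPAGESIZE);  /* 4k, one page, always safe */
            return (MJUM9BYTES);            /* 9k, needs contiguous pages */
    }

Capping the choice at MJUMPAGESIZE means a 9000-byte frame is received into
several chained 4k buffers instead of one 9k cluster, so the driver has to
support chained receive buffers for this to work.)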

For me, the problem all started again once I really started putting storage
load on the FreeBSD machines. At times, I'm seeing 7 Gbit/s on the 10 Gbit
adapters.

Oh, and there are gremlins in the new ctld / iscsi as well. I'll get into
that later, but if a heavily loaded iscsi target goes down, then when it
reboots, the reconnect storm from all the iscsi initiators kernel-panics the
FreeBSD iscsi target host. My machine went through three boot-start-panic
loops before I caught it and put it into single-user mode. Starting ctld
manually seems to make everything okay.
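
(One way to break that kind of loop without sitting at a single-user prompt is
to keep ctld out of the automatic boot and start it by hand once the initiators
have settled; this is plain rc tooling, nothing ctld-specific:

# sysrc ctld_enable=NO
# service ctld onestart
)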




On Wed, May 6, 2015 at 12:03 PM, Mark Schouten  wrote:

> Hi,
>
> On 04/29/2015 04:06 PM, Garrett Wollman wrote:
>
>  If you're using one of the drivers that has this problem, then yes,
>> keeping your layer-2 MTU/MRU below 4096 will probably cause it to use
>> 4k (page-sized) clusters instead, which are perfectly safe.
>>
>> As a side note, at least on the hardware I have to support, Infiniband
>> is limited to 4k MTU -- so I have one "jumbo" network with 4k frames
>> (that's bridged to IB) and one with 9k frames (that everything else
>> uses).
>>
>
> So I was thinking: a customer of mine runs mostly the same setup and has
> no issues at all. The only difference is an MTU of 1500 vs. an MTU of 9000.
>
> I also created a graph in munin, graphing the number of mbuf_jumbo
> requests and failures. I find that when lots of writes occur to the
> iscsi layer, the number of failed requests grows, and so does the number of
> errors on the ethernet interface. See the attached images. My customer is also
> not suffering from a crashing ctld daemon, which crashes every other minute
> in my setup.
>
> So tonight I'm going to switch to an MTU of 1500; I'll let you know if
> that helps.
>
>
> Regards,
>
> Mark Schouten
>
>
> ___
> freebsd-net@freebsd.org mailing list
> http://lists.freebsd.org/mailman/listinfo/freebsd-net
> To unsubscribe, send any mail to "freebsd-net-unsubscr...@freebsd.org"
>
___
freebsd-net@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-net
To unsubscribe, send any mail to "freebsd-net-unsubscr...@freebsd.org"


Re: Frequent hickups on the networking layer

2015-05-06 Thread Mark Schouten

Hi,

On 04/29/2015 04:06 PM, Garrett Wollman wrote:


If you're using one of the drivers that has this problem, then yes,
keeping your layer-2 MTU/MRU below 4096 will probably cause it to use
4k (page-sized) clusters instead, which are perfectly safe.

As a side note, at least on the hardware I have to support, Infiniband
is limited to 4k MTU -- so I have one "jumbo" network with 4k frames
(that's bridged to IB) and one with 9k frames (that everything else
uses).


So I was thinking: a customer of mine runs mostly the same setup and has no 
issues at all. The only difference is an MTU of 1500 vs. an MTU of 9000.

I also created a graph in munin, graphing the number of mbuf_jumbo requests and 
failures. I find that when lots of writes occur to the iscsi layer, the number 
of failed requests grows, and so does the number of errors on the ethernet 
interface. See the attached images. My customer is also not suffering from a 
crashing ctld daemon, which crashes every other minute in my setup.
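
(The counters behind that graph can also be read directly, without munin; the
FAIL column of the 9k zone and the "denied" lines from netstat are the ones to
watch:

# vmstat -z | grep mbuf_jumbo_9k
# netstat -m | grep denied
)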

So tonight I'm going to switch to an MTU of 1500; I'll let you know if that 
helps.


Regards,

Mark Schouten

___
freebsd-net@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-net
To unsubscribe, send any mail to "freebsd-net-unsubscr...@freebsd.org"


Re: Frequent hickups on the networking layer

2015-04-29 Thread Garrett Wollman
< said:

> I'm not really (or really not) comfortable with hacking and recompiling 
> stuff. I'd rather not change anything in the kernel. So would it help in 
> my case to lower my MTU from 9000 to 4000? If I understand correctly, 
> this would mean allocating 4k chunks, which is far more logical from 
> a memory point of view?

If you're using one of the drivers that has this problem, then yes,
keeping your layer-2 MTU/MRU below 4096 will probably cause it to use
4k (page-sized) clusters instead, which are perfectly safe.

As a side note, at least on the hardware I have to support, Infiniband
is limited to 4k MTU -- so I have one "jumbo" network with 4k frames
(that's bridged to IB) and one with 9k frames (that everything else
uses).

-GAWollman
___
freebsd-net@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-net
To unsubscribe, send any mail to "freebsd-net-unsubscr...@freebsd.org"


Re: Frequent hickups on the networking layer

2015-04-29 Thread Garrett Wollman
< said:

> - as you said, like ~ 64k), and allocate that way. That way there's no
> fragmentation to worry about - everything's just using a custom slab
> allocator for these large allocation sizes.

> It's kind of tempting to suggest freebsd support such a thing, as I
> can see increasing requirements for specialised applications that want
> this.

I think this would be an Extremely Good Thing if someone has the
cycles to implement it, and teach some of the popular network
interfaces to use it.

-GAWollman

___
freebsd-net@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-net
To unsubscribe, send any mail to "freebsd-net-unsubscr...@freebsd.org"


Re: Frequent hickups on the networking layer

2015-04-29 Thread Rick Macklem
Paul Thornton wrote:
> Hi,
> 
> On 28/04/2015 22:06, Rick Macklem wrote:
> > ... If your
> > net device driver is one that allocates 9K jumbo mbufs for receive
> > instead of using a list of smaller mbuf clusters, I'd guess this is
> > what is biting you.
> 
> Apologies for the thread drift, but is there a list anywhere of what
> drivers might have this issue?
> 
If you:
# cd /usr/src/sys/dev
# find . -name "*.c" -exec fgrep -l MJUM9BYTES {} \;
you will get a hint w.r.t. which ones might have a problem. Then I
think you'll have to dig into the driver sources to see what it is
actually doing. (As you've seen on the thread Chelsio sounds like
it handles things and allows you to disable use of 9K via a tunable.)
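
(To narrow it down to the driver actually in use on a given box before reading
its source, something like this usually does the job:

# pciconf -lv | grep -B3 -i network

The selector at the top of each matching block names the attached driver (em0,
igb0, ix0 and so on), which tells you which directory under sys/dev to look in.)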

rick
ps: Long ago (1970s) a CS prof I had said "the sources are the only
real documentation". I think he was correct.

> I've certainly seen performance decrease in the past between two
> machines with igb interfaces when the MTU was raised to use 9k
> frames.
> 
> Paul.
> ___
> freebsd-net@freebsd.org mailing list
> http://lists.freebsd.org/mailman/listinfo/freebsd-net
> To unsubscribe, send any mail to
> "freebsd-net-unsubscr...@freebsd.org"
> 
___
freebsd-net@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-net
To unsubscribe, send any mail to "freebsd-net-unsubscr...@freebsd.org"


Re: Frequent hickups on the networking layer

2015-04-29 Thread Paul Thornton

Hi,

On 28/04/2015 22:06, Rick Macklem wrote:

... If your
net device driver is one that allocates 9K jumbo mbufs for receive
instead of using a list of smaller mbuf clusters, I'd guess this is
what is biting you.


Apologies for the thread drift, but is there a list anywhere of what 
drivers might have this issue?


I've certainly seen performance decrease in the past between two 
machines with igb interfaces when the MTU was raised to use 9k frames.


Paul.
___
freebsd-net@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-net
To unsubscribe, send any mail to "freebsd-net-unsubscr...@freebsd.org"


Re: Frequent hickups on the networking layer

2015-04-29 Thread Mark Schouten

Hi,

On 04/28/2015 11:06 PM, Rick Macklem wrote:

There have been email list threads discussing how allocating 9K jumbo
mbufs will fragment the KVM (kernel virtual memory) used for mbuf
cluster allocation and cause grief. If your
net device driver is one that allocates 9K jumbo mbufs for receive
instead of using a list of smaller mbuf clusters, I'd guess this is
what is biting you.


I'm not really (or really not) comfortable with hacking and recompiling 
stuff. I'd rather not change anything in the kernel. So would it help in 
my case to lower my MTU from 9000 to 4000? If I understand correctly, 
this would mean allocating 4k chunks, which is far more logical from 
a memory point of view?



Mark

___
freebsd-net@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-net
To unsubscribe, send any mail to "freebsd-net-unsubscr...@freebsd.org"


Re: Frequent hickups on the networking layer

2015-04-28 Thread Adrian Chadd
I've spoken to more than one company about this stuff and their
answers are all the same:

"we ignore the freebsd allocator, allocate a very large chunk of
memory at boot, tell the VM it plainly just doesn't exist, and abuse
it via the direct map."

That gets around a lot of things, including the "oh how can we get 9k
allocations if we can't find contiguous memory/KVA/either" problem -
you just treat it as an array of 9k buffers (or I'm guessing much larger
- as you said, like ~ 64k), and allocate that way. That way there's no
fragmentation to worry about - everything's just using a custom slab
allocator for these large allocation sizes.

It's kind of tempting to suggest freebsd support such a thing, as I
can see increasing requirements for specialised applications that want
this. One of the things that makes netmap so nice is that it 100% avoids
the allocators in the hot path - it grabs a big chunk of memory and
allocates slots out of that via a bitmap and index values.
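
(A toy userland sketch of that "one big chunk plus a bitmap of slots" idea,
emphatically not netmap's code and with made-up sizes and names, just to show
why fragmentation disappears when every slot is the same size:

    #include <stdint.h>
    #include <stdlib.h>
    #include <string.h>

    #define SLOT_SIZE 9216              /* one 9k buffer per slot */
    #define NSLOTS    4096              /* ~36 MB pool, sized once at startup */

    static unsigned char *pool;               /* the single big allocation */
    static uint64_t       used[NSLOTS / 64];  /* one bit per slot, 0 = free */

    static int
    pool_init(void)
    {
            pool = malloc((size_t)SLOT_SIZE * NSLOTS);
            memset(used, 0, sizeof(used));
            return (pool == NULL ? -1 : 0);
    }

    static void *
    slot_alloc(void)
    {
            for (size_t w = 0; w < NSLOTS / 64; w++) {
                    if (used[w] == UINT64_MAX)
                            continue;                     /* word full */
                    int b = __builtin_ctzll(~used[w]);    /* first clear bit */
                    used[w] |= (uint64_t)1 << b;
                    return (pool + (w * 64 + b) * (size_t)SLOT_SIZE);
            }
            return (NULL);                                /* pool exhausted */
    }

    static void
    slot_free(void *p)
    {
            size_t i = (size_t)((unsigned char *)p - pool) / SLOT_SIZE;
            used[i / 64] &= ~((uint64_t)1 << (i % 64));
    }

Every slot is the same size, so allocation can never fragment the pool; the
price is committing to the pool size up front, which is exactly the trade-off
described here.)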




-adrian
___
freebsd-net@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-net
To unsubscribe, send any mail to "freebsd-net-unsubscr...@freebsd.org"


Re: Frequent hickups on the networking layer

2015-04-28 Thread John-Mark Gurney
Navdeep Parhar wrote this message on Tue, Apr 28, 2015 at 22:16 -0700:
> On Wed, Apr 29, 2015 at 01:08:00AM -0400, Garrett Wollman wrote:
> > < >  said:
> ...
> > > As far as I know (just from email discussion, never used them myself),
> > > you can either stop using jumbo packets or switch to a different net
> > > interface that doesn't allocate 9K jumbo mbufs (doing the receives of
> > > jumbo packets into a list of smaller mbuf clusters).
> > 
> > Or just hack the driver to not use them.  For the Intel drivers this
> > is easy, and at least for the hardware I have there's no benefit to
> > using 9k clusters over 4k; for Chelsio it's quite a bit harder.
> 
> Quite a bit harder, and entirely unnecessary these days.  Recent
> versions of the Chelsio driver will fall back to 4K clusters
> automatically (and on the fly) if the system is short of 9K clusters.
> There are even tunables that will let you set 4K as the only cluster
> size that the driver should allocate.

Can we get this to be the default, and included in more drivers too?

-- 
  John-Mark Gurney  Voice: +1 415 225 5579

 "All that I will do, has been done, All that I have, has not."
___
freebsd-net@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-net
To unsubscribe, send any mail to "freebsd-net-unsubscr...@freebsd.org"


Re: Frequent hickups on the networking layer

2015-04-28 Thread Navdeep Parhar
On Wed, Apr 29, 2015 at 01:08:00AM -0400, Garrett Wollman wrote:
> <  said:
...
> > As far as I know (just from email discussion, never used them myself),
> > you can either stop using jumbo packets or switch to a different net
> > interface that doesn't allocate 9K jumbo mbufs (doing the receives of
> > jumbo packets into a list of smaller mbuf clusters).
> 
> Or just hack the driver to not use them.  For the Intel drivers this
> is easy, and at least for the hardware I have there's no benefit to
> using 9k clusters over 4k; for Chelsio it's quite a bit harder.

Quite a bit harder, and entirely unnecessary these days.  Recent
versions of the Chelsio driver will fall back to 4K clusters
automatically (and on the fly) if the system is short of 9K clusters.
There are even tunables that will let you set 4K as the only cluster
size that the driver should allocate.
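
(For the archive: these are loader tunables, so they go in /boot/loader.conf;
the name below is the one documented in cxgbe(4), check the man page on your
release before relying on it. For example,

hw.cxgbe.largest_rx_cluster="4096"

keeps the driver at page-sized clusters even with a 9000-byte MTU.)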

Regards,
Navdeep
___
freebsd-net@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-net
To unsubscribe, send any mail to "freebsd-net-unsubscr...@freebsd.org"


Re: Frequent hickups on the networking layer

2015-04-28 Thread Garrett Wollman
< said:

> There have been email list threads discussing how allocating 9K jumbo
> mbufs will fragment the KVM (kernel virtual memory) used for mbuf
> cluster allocation and cause grief.

The problem is not KVA fragmentation -- the clusters come from a
separate map which should prevent that -- it's that clusters have to
be physically contiguous, and an active machine is going to have
trouble with that.  The fact that 9k is a goofy size (two pages plus a
little bit) doesn't help matters.
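
(Concretely, with the stock MJUM9BYTES of 9216 bytes: 9216 = 2 x 4096 + 1024,
so every 9k cluster spans three physical pages that must all be contiguous,
whereas a 4k cluster is exactly one page.)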

The other side, as Neel and others have pointed out, is that it's
beneficial for the hardware to have a big chunk of physically
contiguous memory to dump packets into, especially with various kinds
of receive-side offloading.

I see two solutions to this, but don't have the time or resources (or,
frankly, the need) to implement them (and both are probably required
for different situations):

1) Reserve a big chunk of physical memory early on for big clusters.
How much this needs to be will depend on the application and the
particular network interface hardware, but you should be thinking in
terms of megabytes or (on a big server) gigabytes.  Big enough to be
mapped as superpages on hardware where that's beneficial.  If you have
aggressive LRO, "big clusters" might be 64k or larger in size.

2) Use the IOMMU -- if it's available, which it won't be when running
under a hypervisor that's already using it for passthrough -- to
obviate the need for physically contiguous pages; then the problem
reduces to KVA fragmentation, which is easier to avoid in the
allocator.
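
(A very rough sketch of what option (1) could look like, purely to make the
idea concrete: the names and sizes below are made up, no such facility exists
today, and contigmalloc(9) is used only because it is the existing way to ask
for physically contiguous, aligned memory:

    #include <sys/param.h>
    #include <sys/systm.h>
    #include <sys/kernel.h>
    #include <sys/malloc.h>

    static MALLOC_DEFINE(M_BIGCLUS, "bigclus", "reserved big rx clusters");

    #define BIGCLUS_CHUNK   (2UL * 1024 * 1024)  /* superpage-sized pieces */
    #define BIGCLUS_NCHUNKS 128                  /* 256 MB total; tune per box */

    static void *bigclus_chunk[BIGCLUS_NCHUNKS];

    /* Run once, early in boot, while big contiguous runs of physical
     * memory are still easy to find. */
    static void
    bigclus_reserve(void)
    {
            for (int i = 0; i < BIGCLUS_NCHUNKS; i++) {
                    /* 2 MB, 2 MB-aligned, anywhere in RAM, no boundary. */
                    bigclus_chunk[i] = contigmalloc(BIGCLUS_CHUNK, M_BIGCLUS,
                        M_NOWAIT, 0, ~(vm_paddr_t)0, BIGCLUS_CHUNK, 0);
                    if (bigclus_chunk[i] == NULL)
                            break;  /* take what we can get */
            }
            /* The hard, missing part: carving these chunks into big clusters
             * and teaching the mbuf layer and drivers to draw from them
             * instead of the general-purpose zones. */
    }

How much to reserve is, as noted above, workload- and hardware-dependent.)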

> As far as I know (just from email discussion, never used them myself),
> you can either stop using jumbo packets or switch to a different net
> interface that doesn't allocate 9K jumbo mbufs (doing the receives of
> jumbo packets into a list of smaller mbuf clusters).

Or just hack the driver to not use them.  For the Intel drivers this
is easy, and at least for the hardware I have there's no benefit to
using 9k clusters over 4k; for Chelsio it's quite a bit harder.

-GAWollman

___
freebsd-net@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-net
To unsubscribe, send any mail to "freebsd-net-unsubscr...@freebsd.org"


Re: Frequent hickups on the networking layer

2015-04-28 Thread Rick Macklem
Mark Schouten wrote:
> Hi,
> 
> 
> I've got a FreeBSD 10.1-RELEASE box running with iscsi on top of ZFS.
> I've had some major issues with it where it would stop processing
> traffic for a minute or two, but that's 'fixed' by disabling TSO. I
> do have frequent iscsi errors, which are luckily fixed on the iscsi
> layer, but they do cause an occasional error message on both the
> iscsi client and server. Also, I see input errors on the FreeBSD
> server, but I'm unable to find out what those are. I do see a
> relation between iscsi error messages and the number of ethernet
> input errors on the server.
> 
> 
> I saw this message [1] which made me have a look at `vmstat -z`, and
> that shows me the following:
> 
> 
> vmstat -z | head -n 1; vmstat -z | sort -k 6 -t , | tail -10
> ITEM                   SIZE   LIMIT     USED     FREE          REQ      FAIL SLEEP
> zio_data_buf_94208:   94208,      0,     162,       5,      135632,        0,    0
> zio_data_buf_98304:   98304,      0,     118,       9,      101606,        0,    0
> zio_link_cache:          48,      0,       6,   30870, 24853549414,        0,    0
> 8 Bucket:                64,      0,     145,    2831,   148672720,       11,    0
> 32 Bucket:              256,      0,     859,     731,   231513474,       52,    0
> mbuf_jumbo_9k:         9216, 604528,    7230,    2002, 11764806459, 108298123,    0
> 64 Bucket:              512,      0,     808,     352,   147120342,  16375582,    0
> 256 Bucket:            2048,      0,     500,      50,   307051808, 189685088,    0
> vmem btag:               56,      0, 1671605, 1291509,   198933250,     36431,    0
> 128 Bucket:            1024,      0,     410,     106,    65267164,   772374,    0
> 
> 
> I am using jumbo frames. Could it be that the input errors AND my
> frequent hickups come from all those failures to allocate 9k jumbo
> mbufs?
There have been email list threads discussing how allocating 9K jumbo
mbufs will fragment the KVM (kernel virtual memory) used for mbuf
cluster allocation and cause grief. If your
net device driver is one that allocates 9K jumbo mbufs for receive
instead of using a list of smaller mbuf clusters, I'd guess this is
what is biting you.
As far as I know (just from email discussion, never used them myself),
you can either stop using jumbo packets or switch to a different net
interface that doesn't allocate 9K jumbo mbufs (doing the receives of
jumbo packets into a list of smaller mbuf clusters).

I remember Garrett Wollman arguing that 9K mbuf clusters shouldn't
ever be used. I've cc'd him, in case he wants to comment.

I don't know how to increase the KVM that the allocator can use for
9K mbuf clusters nor do I know if that can be used as a work around.
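
(For what it's worth, the zone limit itself is the kern.ipc.nmbjumbo9 sysctl;
it shows up as the 604528 in the LIMIT column of Mark's vmstat output and can
be raised, e.g.:

# sysctl kern.ipc.nmbjumbo9
# sysctl kern.ipc.nmbjumbo9=1209056

but raising the ceiling does not help when the failures come from not finding
physically contiguous memory, which is the situation described in this thread.)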

rick

> And can I increase the sysctls mentioned in [1] at will?
> 
> 
> Thanks
> 
> 
> 
> 
> 
> 
> [1]:
> https://lists.freebsd.org/pipermail/freebsd-questions/2013-August/252827.html
> 
> 
> Kind regards,
> 
> --
> Kerio Operator in the cloud? https://www.kerioindecloud.nl/
> Mark Schouten  | Tuxis Internet Engineering
> KvK: 61527076 | http://www.tuxis.nl/
> T: 0318 200208 | i...@tuxis.nl
___
freebsd-net@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-net
To unsubscribe, send any mail to "freebsd-net-unsubscr...@freebsd.org"

Re: Frequent hickups on the networking layer

2015-04-28 Thread Mark Schouten
I disabled TSO a while ago, after my networking stopped working completely, 
with `ifconfig em0 -tso`.


em0: flags=8843 metric 0 mtu 9000
 
options=4209b
 ether 00:25:90:dc:0f:a2
 nd6 options=29
 media: Ethernet autoselect (1000baseT )
 status: active



nfsv212: flags=8843 metric 0 mtu 1500
 options=3
 ether 00:25:90:dc:0f:a2
 inet6 fe80::225:90ff:fedc:fa2%nfsv212 prefixlen 64 scopeid 0x6 
 inet6 fd::212:31:3:111:fffe prefixlen 64 
 nd6 options=21
 media: Ethernet autoselect (1000baseT )
 status: active
 vlan: 212 parent interface: em0



nfsv308: flags=8843 metric 0 mtu 9000
 options=3
 ether 00:25:90:dc:0f:a2
 inet 10.38.0.253 netmask 0xff00 broadcast 10.38.0.255 
 inet6 fe80::225:90ff:fedc:fa2%nfsv308 prefixlen 64 scopeid 0x8 
 inet6 fd:308::30 prefixlen 64 
 nd6 options=21
 media: Ethernet autoselect (1000baseT )
 status: active
 vlan: 308 parent interface: em0




Or should I disable VLAN_HWTSO as well? If so, how?
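
(One likely answer to the "how": if the driver exposes the capability, it can
be toggled per interface with ifconfig on the parent device, and a follow-up
"ifconfig em0" shows whether VLAN_HWTSO actually went away:

# ifconfig em0 -vlanhwtso

To make it persistent, append "-tso -vlanhwtso" to the ifconfig_em0 line in
/etc/rc.conf.)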


Kind regards,

-- 
Kerio Operator in the cloud? https://www.kerioindecloud.nl/
Mark Schouten  | Tuxis Internet Engineering
KvK: 61527076 | http://www.tuxis.nl/
T: 0318 200208 | i...@tuxis.nl



 From:   Chris Forgeron 
 To:   Mark Schouten , "freebsd-net@FreeBSD.org" 
 
 Sent:   28-4-2015 13:58 
 Subject:   RE: Frequent hickups on the networking layer 

What network card are you using? 
 
There have been a few reports of issues with TSO, if you check the list. I had 
some myself a while ago, but they are now resolved thanks to a few helpful folk 
here. 
 
You could increase your mbufs, but if the problem is a leak/error in the stack, 
then you're just delaying the behaviour.  
 
-Original Message- 
From: owner-freebsd-...@freebsd.org [mailto:owner-freebsd-...@freebsd.org] On 
Behalf Of Mark Schouten 
Sent: Tuesday, April 28, 2015 5:48 AM 
To: freebsd-net@FreeBSD.org 
Subject: Frequent hickups on the networking layer 
 
Hi, 
 
 
I've got a FreeBSD 10.1-RELEASE box running with iscsi on top of ZFS. I've had 
some major issues with it where it would stop processing traffic for a minute 
or two, but that's 'fixed' by disabling TSO. I do have frequent iscsi errors, 
which are luckily fixed on the iscsi layer, but they do cause an occasional 
error message on both the iscsi client and server. Also, I see input errors on 
the FreeBSD server, but I'm unable to find out what those are. I do see a 
relation between iscsi error messages and the number of ethernet input errors 
on the server. 
 
 
I saw this message [1] which made me have a look at `vmstat -z`, and that shows 
me the following: 
 
 
vmstat -z | head -n 1; vmstat -z | sort -k 6 -t , | tail -10 
ITEM                   SIZE   LIMIT     USED     FREE          REQ      FAIL SLEEP 
zio_data_buf_94208:   94208,      0,     162,       5,      135632,        0,    0 
zio_data_buf_98304:   98304,      0,     118,       9,      101606,        0,    0 
zio_link_cache:          48,      0,       6,   30870, 24853549414,        0,    0 
8 Bucket:                64,      0,     145,    2831,   148672720,       11,    0 
32 Bucket:              256,      0,     859,     731,   231513474,       52,    0 
mbuf_jumbo_9k:         9216, 604528,    7230,    2002, 11764806459, 108298123,    0 
64 Bucket:              512,      0,     808,     352,   147120342,  16375582,    0 
256 Bucket:            2048,      0,     500,      50,   307051808, 189685088,    0 
vmem btag:               56,      0, 1671605, 1291509,   198933250,     36431,    0 
128 Bucket:            1024,      0,     410,     106,    65267164,   772374,    0 
 
 
I am using jumbo frames. Could it be that the input errors AND my frequent 
hickups come from all those failures to allocate 9k jumbo mbufs? And can I 
increase the sysctls mentioned in [1] at will? 
 
 
Thanks 
 
 
 
 
 
 
[1]: 
https://lists.freebsd.org/pipermail/freebsd-questions/2013-August/252827.html 
 
 
Kind regards, 
 
-- 
Kerio Operator in the cloud? https://www.kerioindecloud.nl/ 
Mark Schouten  | Tuxis Internet Engineering 
KvK: 61527076 | http://www.tuxis.nl/ 
T: 0318 200208 | i...@tuxis.nl 






Frequent hickups on the networking layer

2015-04-28 Thread Mark Schouten
Hi,


I've got a FreeBSD 10.1-RELEASE box running with iscsi on top of ZFS. I've had 
some major issues with it where it would stop processing traffic for a minute 
or two, but that's 'fixed' by disabling TSO. I do have frequent iscsi errors, 
which are luckily fixed on the iscsi layer, but they do cause an occasional 
error message on both the iscsi client and server. Also, I see input errors on 
the FreeBSD server, but I'm unable to find out what those are. I do see a 
relation between iscsi error messages and the number of ethernet input errors 
on the server.


I saw this message [1] which made me have a look at `vmstat -z`, and that shows 
me the following:


vmstat -z | head -n 1; vmstat -z | sort -k 6 -t , | tail -10
ITEM                   SIZE   LIMIT     USED     FREE          REQ      FAIL SLEEP
zio_data_buf_94208:   94208,      0,     162,       5,      135632,        0,    0
zio_data_buf_98304:   98304,      0,     118,       9,      101606,        0,    0
zio_link_cache:          48,      0,       6,   30870, 24853549414,        0,    0
8 Bucket:                64,      0,     145,    2831,   148672720,       11,    0
32 Bucket:              256,      0,     859,     731,   231513474,       52,    0
mbuf_jumbo_9k:         9216, 604528,    7230,    2002, 11764806459, 108298123,    0
64 Bucket:              512,      0,     808,     352,   147120342,  16375582,    0
256 Bucket:            2048,      0,     500,      50,   307051808, 189685088,    0
vmem btag:               56,      0, 1671605, 1291509,   198933250,     36431,    0
128 Bucket:            1024,      0,     410,     106,    65267164,   772374,    0
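
(For scale, the FAIL column above works out to 108,298,123 failed 9k-cluster
allocations against 11,764,806,459 requests, i.e. roughly 0.9% of all 9k
requests failing, a rate that could plausibly surface as intermittent input
errors only under sustained load.)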


I am using jumbo frames. Could it be that the input errors AND my frequent 
hickups come from all those failures to allocate 9k jumbo mbufs? And can I 
increase the sysctls mentioned in [1] at will?


Thanks






[1]: 
https://lists.freebsd.org/pipermail/freebsd-questions/2013-August/252827.html


Kind regards,

-- 
Kerio Operator in the cloud? https://www.kerioindecloud.nl/
Mark Schouten  | Tuxis Internet Engineering
KvK: 61527076 | http://www.tuxis.nl/
T: 0318 200208 | i...@tuxis.nl
