Re: Frequent hickups on the networking layer

2015-04-29 Thread Mark Schouten

Hi,

On 04/28/2015 11:06 PM, Rick Macklem wrote:

There have been email list threads discussing how allocating 9K jumbo
mbufs will fragment the KVM (kernel virtual memory) used for mbuf
cluster allocation and cause grief. If your
net device driver is one that allocates 9K jumbo mbufs for receive
instead of using a list of smaller mbuf clusters, I'd guess this is
what is biting you.


I'm not really (or really not) comfortable with hacking and recompiling 
stuff. I'd rather not change anything in the kernel. So would it help in 
my case to lower my MTU from 9000 to 4000? If I understand correctly, 
that would make the driver allocate 4k chunks instead, which seems far 
more sensible from a memory point of view?



Mark

___
freebsd-net@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-net
To unsubscribe, send any mail to freebsd-net-unsubscr...@freebsd.org


Re: Frequent hickups on the networking layer

2015-04-29 Thread Paul Thornton

Hi,

On 28/04/2015 22:06, Rick Macklem wrote:

... If your
net device driver is one that allocates 9K jumbo mbufs for receive
instead of using a list of smaller mbuf clusters, I'd guess this is
what is biting you.


Apologies for the thread drift, but is there a list anywhere of what 
drivers might have this issue?


I've certainly seen performance decrease in the past between two 
machines with igb interfaces when the MTU was raised to use 9k frames.


Paul.


netmap: Does netmap support linux kernel 2.6.32 ?

2015-04-29 Thread ChenXiaodong
Hi, 

I cannot receive packets using netmap compiled for Linux kernel 2.6.32.xxx. I 
tried CentOS 6.5, Red Hat and Debian, all of which are based on Linux kernel 
2.6.32.xxx. None of them works. Is Linux 2.6.32 too old for netmap?


This is the test code I used; it is copied from netmap's man page. The program 
blocks in the call to poll() and never returns.

#define NETMAP_WITH_LIBS
#include <net/netmap_user.h>
#include <poll.h>
#include <stdio.h>

int main(int argc, char **argv)
{
	struct nm_desc *d;
	struct pollfd fds;
	u_char *buf;
	struct nm_pkthdr h;

	(void) argc;
	(void) argv;

	d = nm_open("netmap:eth0", NULL, 0, 0);
	if (d == NULL)
		return 1;
	fds.fd = NETMAP_FD(d);
	fds.events = POLLIN;
	for (;;) {
		poll(&fds, 1, -1);
		while ((buf = nm_nextpkt(d, &h))) {
			printf("packet %d\n", h.len);
			printf("  dmac: %02x:%02x:%02x:%02x:%02x:%02x",
			    buf[0], buf[1], buf[2], buf[3], buf[4], buf[5]);
			buf += 6;
			printf(" - smac: %02x:%02x:%02x:%02x:%02x:%02x\n",
			    buf[0], buf[1], buf[2], buf[3], buf[4], buf[5]);
		}
	}
	nm_close(d);
	return 0;
}

Thanks! Any reply is much appreciated!

/ChenXiaodong
  


Re: Frequent hickups on the networking layer

2015-04-29 Thread John-Mark Gurney
Navdeep Parhar wrote this message on Tue, Apr 28, 2015 at 22:16 -0700:
 On Wed, Apr 29, 2015 at 01:08:00AM -0400, Garrett Wollman wrote:
  On Tue, 28 Apr 2015 17:06:02 -0400 (EDT), Rick Macklem 
  rmack...@uoguelph.ca said:
 ...
   As far as I know (just from email discussion, never used them myself),
   you can either stop using jumbo packets or switch to a different net
   interface that doesn't allocate 9K jumbo mbufs (doing the receives of
   jumbo packets into a list of smaller mbuf clusters).
  
  Or just hack the driver to not use them.  For the Intel drivers this
  is easy, and at least for the hardware I have there's no benefit to
  using 9k clusters over 4k; for Chelsio it's quite a bit harder.
 
 Quite a bit harder, and entirely unnecessary these days.  Recent
 versions of the Chelsio driver will fall back to 4K clusters
 automatically (and on the fly) if the system is short of 9K clusters.
 There are even tunables that will let you set 4K as the only cluster
 size that the driver should allocate.

Can we get this to be the default, and included in more drivers too?

-- 
  John-Mark Gurney  Voice: +1 415 225 5579

 All that I will do, has been done, All that I have, has not.


Re: Frequent hickups on the networking layer

2015-04-29 Thread Adrian Chadd
I've spoken to more than one company about this stuff and their
answers are all the same:

"we ignore the FreeBSD allocator, allocate a very large chunk of
memory at boot, tell the VM it plainly just doesn't exist, and abuse
it via the direct map."

That gets around a lot of things, including the "how can we get a 9k
allocation if we can't find contiguous memory/KVA" problem - you just
treat the region as an array of 9k slots (or something much larger,
as you said, like ~64k) and allocate that way. That way there's no
fragmentation to worry about - everything's just using a custom slab
allocator for these large allocation sizes.

It's kind of tempting to suggest freebsd support such a thing, as I
can see increasing requirements for specialised applications that want
this. One of the things that makes netmap so nice is it 100% avoids
the allocators in the hot path - it grabs a big chunk of memory and
allocates slots out of that via a bitmap and index values.




-adrian


Re: Frequent hickups on the networking layer

2015-04-29 Thread Garrett Wollman
On Tue, 28 Apr 2015 23:37:22 -0700, Adrian Chadd adr...@freebsd.org said:

 - as you said, like ~ 64k), and allocate that way. That way there's no
 fragmentation to worry about - everything's just using a custom slab
 allocator for these large allocation sizes.

 It's kind of tempting to suggest freebsd support such a thing, as I
 can see increasing requirements for specialised applications that want
 this.

I think this would be an Extremely Good Thing if someone has the
cycles to implement it, and teach some of the popular network
interfaces to use it.

-GAWollman



Re: Frequent hickups on the networking layer

2015-04-29 Thread Garrett Wollman
On Wed, 29 Apr 2015 09:30:34 +0200, Mark Schouten m...@tuxis.nl said:

 I'm not really (or really not) comfortable with hacking and recompiling 
 stuff. I'd rather not change anything in the kernel. So would it help in 
 my case to lower my MTU from 9000 to 4000? If I understand correctly, 
 this would need to allocate chunks of 4k, which is far more logical from 
 a memory point of view?

If you're using one of the drivers that has this problem, then yes,
keeping your layer-2 MTU/MRU below 4096 will probably cause it to use
4k (page-sized) clusters instead, which are perfectly safe.

As a side note, at least on the hardware I have to support, Infiniband
is limited to 4k MTU -- so I have one jumbo network with 4k frames
(that's bridged to IB) and one with 9k frames (that everything else
uses).

-GAWollman


[Differential] [Updated] D1438: FreeBSD callout rewrite and cleanup

2015-04-29 Thread eadler (Eitan Adler)
eadler removed a reviewer: Doc Committers.

REPOSITORY
  rS FreeBSD src repository

REVISION DETAIL
  https://reviews.freebsd.org/D1438

EMAIL PREFERENCES
  https://reviews.freebsd.org/settings/panel/emailpreferences/

To: hselasky, jhb, adrian, markj, emaste, sbruno, imp, lstewart, rwatson, gnn, 
rrs, kostikbel, delphij, neel, erj
Cc: avg, jch, wblock, freebsd-net