On Sat, May 15, 2010 at 9:23 AM, Barney Cordoba wrote:
> --- On Fri, 5/14/10, Alexander Sack wrote:
>> <...>

On Fri, May 14, 2010 at 1:01 PM, Jack Vogel wrote:
> On Fri, May 14, 2010 at 8:18 AM, Alexander Sack wrote:
>> On Fri, May 14, 2010 at 10:07 AM, Andrew Gallatin wrote:
>>> Alexander Sack wrote:
>>> <...>

<...> to pass the FCS to the host.
> -----Original Message-----
> From: Andrew Gallatin
> Sent: Friday, May 14, 2010 8:41 AM

On Fri, May 14, 2010 at 11:41 AM, Andrew Gallatin wrote:
> Alexander Sack wrote:
>> <...>

On Fri, May 14, 2010 at 10:07 AM, Andrew Gallatin wrote:
> Alexander Sack wrote:
> <...>

On Fri, May 14, 2010 at 8:18 AM, Alexander Sack wrote:
> On Fri, May 14, 2010 at 10:07 AM, Andrew Gallatin wrote:
>> Alexander Sack wrote:
>> <...>

On Tue, May 11, 2010 at 9:51 AM, Andrew Gallatin wrote:
> Murat Balaban [mu...@enderunix.org] wrote:
>> <...>

Alexander Sack wrote:
<...>

To use DCA you need:
- A DCA driver to talk to the IOATDMA/DCA PCIe device and obtain the tag table
- An interface that a client device (e.g. a NIC driver) can use to obtain
  either the tag table, or at least the correct tag for the CPU that the
  interrupt is bound to
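
Roughly, and with every name below invented for illustration (dca_get_tag(),
the table size, the register layout -- none of this is an existing FreeBSD
interface), those two pieces might fit together like this in C:

#include <stdint.h>

#define DCA_TAG_TABLE_SIZE  64          /* assumed: one tag slot per CPU */

/* Piece 1: the IOATDMA/DCA driver owns the per-CPU tag table it read
 * from the chipset. */
extern uint8_t dca_tag_table[DCA_TAG_TABLE_SIZE];

/* Piece 2: the interface a client (NIC) driver calls to get the tag for
 * a given CPU. */
static inline uint8_t
dca_get_tag(int cpu)
{
    return (dca_tag_table[cpu]);
}

/* The NIC driver then writes that tag into its per-queue DCA control
 * register, so the device's descriptor/payload writes are steered into
 * the cache of the CPU that services the queue's interrupt.  The bit
 * layout here is made up for the example. */
#define DCA_RXCTRL_TAG_MASK     0xffu
#define DCA_RXCTRL_DESC_DCA_EN  (1u << 5)

void
nic_rxq_enable_dca(volatile uint32_t *rxctrl, int irq_cpu)
{
    uint32_t v = *rxctrl;

    v &= ~DCA_RXCTRL_TAG_MASK;          /* clear any stale tag */
    v |= dca_get_tag(irq_cpu);          /* tag for the interrupting CPU */
    v |= DCA_RXCTRL_DESC_DCA_EN;        /* enable descriptor DCA */
    *rxctrl = v;
}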

Alexander Sack wrote:
> On Fri, May 14, 2010 at 10:07 AM, Andrew Gallatin wrote:
>> Alexander Sack wrote:
>> <...>

Alexander Sack wrote:
<...>
>> Using this driver/firmware combo, we can receive minimal packets at
>> line rate (14.8Mpps) to userspace. You can even access this using a
>> libpcap interface. The trick is that the fast paths are OS-bypass,
>> and don't suffer from OS overheads, like lock contention.
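
For scale, 14.8 Mpps is just the minimum-frame arithmetic: a 64-byte frame
plus 8 bytes of preamble and a 12-byte inter-frame gap occupies 84 bytes on
the wire, and 10^10 bit/s divided by 84 * 8 bits is roughly 14.88 Mpps. And
since the passage mentions a libpcap interface, here is a minimal, generic
pcap receive loop for comparison; the interface name "ix0" and the snap
length are assumptions for the example, and nothing in it is specific to the
driver being discussed:

#include <pcap.h>
#include <stdio.h>

/* Per-packet callback: just count packets delivered to userspace. */
static void
got_packet(u_char *user, const struct pcap_pkthdr *h, const u_char *bytes)
{
    unsigned long *count = (unsigned long *)user;

    (*count)++;
    (void)h;
    (void)bytes;
}

int
main(void)
{
    char errbuf[PCAP_ERRBUF_SIZE];
    unsigned long count = 0;
    pcap_t *p;

    /* "ix0", 64-byte snaplen, promiscuous, 1 s read timeout (assumed values) */
    p = pcap_open_live("ix0", 64, 1, 1000, errbuf);
    if (p == NULL) {
        fprintf(stderr, "pcap_open_live: %s\n", errbuf);
        return (1);
    }
    if (pcap_loop(p, -1, got_packet, (u_char *)&count) == -1)
        fprintf(stderr, "pcap_loop: %s\n", pcap_geterr(p));
    pcap_close(p);
    printf("%lu packets\n", count);
    return (0);
}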

Murat Balaban [mu...@enderunix.org] wrote:
> <...>

--- On Sun, 5/9/10, Jack Vogel wrote:
> <...>

On Sun, May 9, 2010 at 6:43 AM, Barney Cordoba wrote:
> --- On Sat, 5/8/10, Murat Balaban wrote:
>> <...>

Much of the FreeBSD networking stack has been made parallel in order to
cope with high packet rates at 10 Gig/sec operation.

I've seen good numbers (near 10 Gig) in my tests involving TCP/UDP
send/receive (latest Intel driver).

As far as BPF is concerned, the above statement does not hold true, since <...>
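
For context, the BPF path in question is the conventional in-kernel /dev/bpf
one. A bare-bones FreeBSD reader looks roughly like the sketch below; the
interface name "ix0" is an assumption and error handling is kept minimal:

#include <sys/types.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <net/if.h>
#include <net/bpf.h>
#include <err.h>
#include <fcntl.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int
main(void)
{
    struct bpf_hdr *bh;
    struct ifreq ifr;
    u_int blen, immediate = 1;
    char *buf, *pkt;
    ssize_t n;
    int fd;

    if ((fd = open("/dev/bpf", O_RDONLY)) == -1)        /* cloning bpf device */
        err(1, "open /dev/bpf");
    if (ioctl(fd, BIOCGBLEN, &blen) == -1)               /* kernel buffer size */
        err(1, "BIOCGBLEN");
    if ((buf = malloc(blen)) == NULL)
        err(1, "malloc");

    memset(&ifr, 0, sizeof(ifr));
    strlcpy(ifr.ifr_name, "ix0", sizeof(ifr.ifr_name));  /* assumed interface */
    if (ioctl(fd, BIOCSETIF, &ifr) == -1)                /* attach to interface */
        err(1, "BIOCSETIF");
    if (ioctl(fd, BIOCIMMEDIATE, &immediate) == -1)      /* return data as it arrives */
        err(1, "BIOCIMMEDIATE");

    for (;;) {
        if ((n = read(fd, buf, blen)) <= 0)              /* one read, many packets */
            break;
        pkt = buf;
        while (pkt < buf + n) {
            bh = (struct bpf_hdr *)pkt;
            /* packet bytes start at pkt + bh->bh_hdrlen, bh->bh_caplen long */
            pkt += BPF_WORDALIGN(bh->bh_hdrlen + bh->bh_caplen);
        }
    }
    free(buf);
    close(fd);
    return (0);
}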

Looks a little like
http://lists.freebsd.org/pipermail/svn-src-all/2010-May/023679.html
but for Intel. Cool.

Vince

On 07/05/2010 23:01, grarpamp wrote:
> <...>

Just wondering in general these days how close FreeBSD is to full 10Gb rates
at various packet sizes, from the minimum Ethernet frame to max jumbo 65k++,
for things like BPF, ipfw/pf, routing, switching, etc.

http://www.ntop.org/blog/?p=86