On 5 December 2017 at 06:00, Dave Taht wrote:
>>> The route table lookup is also really expensive on the main CPU.
>
> To clarify the context here, I was asking specifically if the X5 Mellanox card
> did routing table offload or only switching.
>
To clarify what I know the X5
Do you think that "RTT to San Francisco" is a clear enough, predictable
enough measure that we can use it in the context of non-technical users
and obfuscating salescritters?
--dave
who vaguely watched RTT to Charlottetown PEI, Vancouver and Washington
DC in a previous life
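One crude way to sanity-check an "RTT to <city>" number, for anyone wanting to reproduce it, is to time a TCP handshake, which needs no raw sockets or root. This is an illustrative sketch, not a tool from this thread; the host and port are placeholders, and connect() time includes kernel and scheduler noise, so treat it as an upper bound on network RTT:

```python
import socket
import time

def tcp_rtt_ms(host, port=443, samples=5):
    """Estimate RTT by timing TCP handshakes: connect() returns once
    the SYN/SYN-ACK exchange completes, i.e. after ~one round trip."""
    rtts = []
    for _ in range(samples):
        start = time.monotonic()
        with socket.create_connection((host, port), timeout=3):
            pass  # close immediately; we only wanted the handshake
        rtts.append((time.monotonic() - start) * 1000.0)
    return rtts

# e.g. min(tcp_rtt_ms("example.net")) as a rough "RTT to <city>" figure
```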
On 04/12/17
I suggest we stop talking about throughput, which has been the mistaken idea
about networking for 30-40 years.
Almost all networking ends up being about end-to-end response time in a
multiplexed system.
Or put another way: "It's the Latency, Stupid".
I get (and have come to expect) 27
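The arithmetic behind "it's the latency, stupid" is worth spelling out: for small, request/response-style transfers, the RTT term dominates and extra bandwidth barely moves the needle. A deliberately simple model (one RTT of setup plus serialization time; the numbers below are illustrative):

```python
def response_time_ms(size_bytes, bandwidth_bps, rtt_ms):
    """One round trip of setup plus time to serialize the bytes.
    Ignores slow start, loss, and queueing -- intentionally minimal."""
    return rtt_ms + (size_bytes * 8 / bandwidth_bps) * 1000.0

# Fetching a 10 kB object at 30 ms RTT:
slow = response_time_ms(10_000, 10e6, 30)    # 10 Mbit/s  -> 38.0 ms
fast = response_time_ms(10_000, 300e6, 30)   # 300 Mbit/s -> ~30.3 ms
# 30x the bandwidth buys back only ~20% of the response time.
```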
Hello,
> Scaling up to more CPUs and TCP streams, Tariq[1] and I have shown that the
> Linux kernel network stack scales to 94Gbit/s (linerate minus overhead).
> But when the driver's page-recycler fails, we hit bottlenecks in the
> page-allocator, which cause negative scaling to around 43Gbit/s.
>
> [1]
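A back-of-envelope calculation (mine, not Jesper's actual methodology) shows why the page-recycler matters at these rates: if every MTU-sized frame costs one trip to the page allocator, the allocator must keep up with millions of requests per second.

```python
def page_allocs_per_sec(rate_gbps, wire_bytes=1538):
    """Allocations/sec if each received frame needs a fresh page.
    wire_bytes = 1500 MTU + 38 bytes of Ethernet overhead
    (preamble, header, FCS, inter-frame gap)."""
    return rate_gbps * 1e9 / 8.0 / wire_bytes

# At 94 Gbit/s that is ~7.6 million page allocations per second --
# a per-allocation budget of roughly 130 ns, before any real work.
```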
Jesper:
I have a tendency to deal with netdev by itself and never cross-post
there, as the bufferbloat.net servers (primarily to combat spam)
mandate STARTTLS and vger doesn't support it at all, thus leading to
raising davem's blood pressure, which I'd rather not do.
But moving on...
On Mon, Dec
> In a previous life I did some work on the optimization (by remote
> proxying) of the SMB protocol used by Samba [...] Eventually we said
> the heck with it, and sat Samba on top of a different protocol entirely,
The audience is waiting with bated breath for more details.
-- Juliusz
On 03/12/17 10:44 PM, Dave Taht wrote:
More generally, the case where you have a queue containing acks, stored
up for whatever reason (congestion, media access, asymmetry), is a
chance for a middlebox or host to do something "smarter" to thin them
out.
Acks don't respond to conventional
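The "thin them out" idea can be sketched concretely. The toy filter below (a hypothetical simplification, not cake's or any shipping ack-filter) drops a queued pure ACK when a later cumulative ACK for the same flow sits behind it in the queue; a real implementation must also leave ACKs carrying SACK blocks, ECN marks, or window updates alone.

```python
def thin_acks(queue):
    """queue: list of (flow_id, ack_no, has_payload) tuples, oldest first.
    Keep data segments, and only the newest pure ACK per flow: a later
    cumulative ACK supersedes the earlier ones."""
    newest_pure_ack = {}
    for i, (flow, _ack, has_payload) in enumerate(queue):
        if not has_payload:
            newest_pure_ack[flow] = i
    return [seg for i, seg in enumerate(queue)
            if seg[2] or newest_pure_ack[seg[0]] == i]

# Three queued ACKs for flow "a" collapse to one:
# thin_acks([("a", 100, False), ("a", 200, False),
#            ("b", 50, True),  ("a", 300, False)])
# -> [("b", 50, True), ("a", 300, False)]
```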
Hi folks,
Just to inject a touch of reality into the discussion...
> On Dec 3, 2017, at 10:44 PM, bloat-requ...@lists.bufferbloat.net wrote:
>
>> I can buy 300/10 megabit/s access from my cable provider.
>
> Don't!
It would be wonderful to get a fast, symmetric link from my ISP, but here's
On Mon, 4 Dec 2017, Joel Wirāmu Pauling wrote:
> How to deliver a switch, when the wiring and port standard isn't
> actually workable?
Not workable?
10GBase-T is out of Voltage Spec with SFP+; you can get copper SFP+
Yep, the "Cu SFP" was a luxury for a while. Physics is a harsh mistress
Oh we have these in the Enterprise segment already. The main use case
is VNF on edge device for SDN applications right now. But even so the
range of vendors/devices is pretty limited.
Looking at chipsets coming/just arrived from the chipset vendors, I think
we will see CPE with 10G SFP+ and 802.11ax Q3/Q4 this year.
Price is of course a bit steeper than the 15 USD USB DSL modem :P, but
probably fits nicely in the SMB segment.
Pedro
On Mon, Dec 4, 2017 at 11:47 AM, Joel Wirāmu
On Sun, 03 Dec 2017 20:19:33 -0800 Dave Taht wrote:
> Changing the topic, adding bloat.
Adding netdev, and also adjusting the topic to be a rant about how the Linux
kernel network stack is actually damn fast, and if you need something
faster then XDP can solve your needs...
> Joel
Bingo; that's definitely step one - gateways capable of 10gbit
becoming the norm.
For in-home or even SMB use, I doubt that 10G to the user PC is the main use
case.
It's having the uplink capable of supporting more than 1G; that 1G does not
necessarily need to be generated by only one host on the LAN.
Pedro
On Mon, Dec 4, 2017 at 11:27 AM, Joel Wirāmu Pauling
How to deliver a switch, when the wiring and port standard isn't
actually workable?
10GBase-T is out of Voltage Spec with SFP+; you can get copper SFP+
but they are out of spec... 10GBase-T doesn't really work over Cat5e
more than a couple of meters (if you are lucky) and even Cat6 is only
rated
On Mon, 4 Dec 2017, Joel Wirāmu Pauling wrote:
> I'm not going to pretend that 1Gig isn't enough for most people. But I
> refuse to believe it's the network's equivalent of a 10A power (20A
> depending on where you live in the world) AC residential phase
> distribution circuit.
That's a good analogy.
I'm not going to pretend that 1Gig isn't enough for most people. But I
refuse to believe it's the network's equivalent of a 10A power (20A
depending on where you live in the world) AC residential phase
distribution circuit.
This isn't a question about what people need, it's more about what the
On Sun, 3 Dec 2017, Dave Taht wrote:
What Jesper's been working on for ages has been to try and get Linux's
PPS up for small packets, which last I heard was hovering at about
4Gbit/s.
You might want to look into what the VPP (https://fd.io/) peeps are doing.
They can at least forward packets
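For scale on the small-packet numbers in this subthread, the standard line-rate arithmetic (a rough sketch, using the usual 20 bytes of preamble plus inter-frame gap per frame):

```python
def line_rate_pps(rate_gbps, frame_bytes=64):
    """Packets/sec at line rate for a given Ethernet frame size;
    every frame also costs 20 bytes of preamble + inter-frame gap."""
    return rate_gbps * 1e9 / 8.0 / (frame_bytes + 20)

# 10 Gbit/s of minimum-size (64-byte) frames is ~14.88 Mpps, and
# "about 4Gbit/s" of small packets corresponds to roughly 6 Mpps --
# that gap is what VPP-style batched forwarding is chasing.
```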