On 15-10-25 03:46 AM, Some Developer wrote:
> I'm just wondering what hardware spec I'd need to push 20 gigabits of network traffic on an OpenBSD server?

Short answer: It's not generally possible today, at least for your use case.

Medium answer: Contact Esdenera Networks to find out. They manage to do it somehow. I'm sure they'll be happy to make it happen for you in exchange for suitable amounts of money...


Longer answer:

Network performance numbers have been presented by gnn at various conferences over the last year or so, and they consistently show that OpenBSD, while performing well with a single-threaded stack, falls badly behind on multi-core systems and isn't able to keep up at 10Gbps. The OpenBSD team is (currently, AFAIK) working on making the network stack multi-threaded, or at least not giant-locked, which should (eventually) dramatically improve performance scalability.

On top of that, there are substantial optimizations possible; research in the FreeBSD camp (and experience under OpenBSD as well) has shown that seemingly-similar hardware can perform radically differently. Drivers make a big difference.

You talk about storing the data - *writing* data to disk at 10Gbps (sustained) is currently in the realm of high-energy physics, with multi-million-dollar budgets for the storage arrays. A 7200rpm disk can charitably be said to write at up to 100MBytes/sec, but that's not a sustained figure, and 10Gbps works out to 1.25GBytes/sec - so you need a minimum 13-disk array even assuming 100% ideal throughput, which doesn't exist in the real world. More likely you'd have to buy a large HDS array to get that kind of throughput. Plus, that's over 3PB (yes, PETAbytes) of data every month. Are you building this for the NSA?!?
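
If you want to check my math, here's the back-of-envelope calculation as a quick Python sketch (the 100MB/s-per-spindle figure and the 30-day month are my assumptions; plug in your own):

    # Storage math for sustained 10Gbps capture. Assumed figures:
    # 100MB/s sustained write per 7200rpm spindle, 30-day month.
    LINE_RATE_GBPS = 10

    bytes_per_sec = LINE_RATE_GBPS * 1e9 / 8           # 1.25 GB/s
    disks_needed  = bytes_per_sec / 100e6              # ~12.5 -> 13 spindles, ideal case
    pb_per_month  = bytes_per_sec * 86400 * 30 / 1e15  # ~3.24 PB

    print(f"{bytes_per_sec / 1e9:.2f} GB/s -> {disks_needed:.1f} disks, "
          f"{pb_per_month:.2f} PB/month")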

You do realize that this means you're now trying to push *30* Gbits/sec on a single server, right? (10 in, 10 out, 10 logged) Even Netflix, who spend a ridiculous amount of time on optimization, have only recently pushed FreeBSD servers, with tons of custom code and tweaks, past the 65Gbps-per-socket mark.

Lastly, Gbits/sec isn't the bottleneck. The bottleneck is packets-per-second. If you're pushing 10Gbps worth of 1500-byte packets, then this is possible today. (Not sure about 30Gbps.) If you're trying to push 10Gbps worth of 64-byte packets on commodity hardware, forget about this pipe dream for another few years until the fully-MP network stack is finished and optimized.
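
The wire-rate arithmetic, if you want to play with it (another quick Python sketch; it assumes standard Ethernet framing overhead):

    # Packets-per-second at 10GbE line rate. Each frame carries 20
    # extra bytes on the wire (7B preamble + 1B SFD + 12B inter-frame
    # gap); a "1500-byte packet" is a 1518-byte frame once the
    # Ethernet header and FCS are counted.
    def pps_at_line_rate(frame_bytes, gbps=10):
        wire_bytes = frame_bytes + 20
        return gbps * 1e9 / (wire_bytes * 8)

    print(f"1518-byte frames: {pps_at_line_rate(1518):>12,.0f} pps")  # ~813 Kpps
    print(f"  64-byte frames: {pps_at_line_rate(64):>12,.0f} pps")    # ~14.88 Mpps

Roughly 813K pps for full-size frames versus 14.88M pps for minimum-size ones - an 18x difference in per-packet work at the same bit rate, which is why the packet size matters far more than the Gbps figure.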

Good luck... but you might want to consider doing this on a Juniper MX series or Cisco ASR instead - those platforms should at least be able to do the tunnelling part for around $250k; then feed the output into a 10GE switch with port mirroring (~$10k), then into a Network Flight Recorder or similar to actually capture that much data (~$150k).

-Adam
