We're all really excited, particularly now that dual-10GbE is starting to show up on low-cost server motherboards. This kinda reminds me of when the 100 Mbit to 1 GbE transition began happening (years ago it seems). I still have a first-run commercial gigabit switch in my pile. It's a huge box with fans and 4 ports on it. Count'm, *four* 1 GbE ports in a box that needs fans. The transition ramp to 10GbE seems to be running at around the same pace, with a long high-price commercial ramp leading into a period of huge cost and price reductions as the technology makes its way into the consumer space.
Throw in a little NVMe-based SSD storage and a low-cost box today easily has 100x the service capability versus just 10 years ago.

-Matt

On Fri, Mar 3, 2017 at 9:43 AM, Samuel J. Greear <[email protected]> wrote:
> On Fri, Mar 3, 2017 at 12:44 AM, Sepherosa Ziehau <[email protected]>
> wrote:
>
>> Hi all,
>>
>> Since so many folks are interested in the performance comparison, I
>> just did one network related comparison here:
>> https://leaf.dragonflybsd.org/~sephe/perf_cmp.pdf
>>
>> The intention is _not_ to troll, but to identify gaps, and what we can
>> do to keep improving DragonFlyBSD.
>>
>> According to the comparison, we _do_ find one area where DragonFlyBSD's
>> network stack can be improved:
>> Utilize all available CPUs for network protocol processing.
>>
>> Currently we only use a power-of-2 number of CPUs to handle network
>> protocol processing, e.g. on a 24-CPU system, only 16 CPUs will be used
>> to handle network protocol processing. That is fine for workloads
>> involving userland applications, e.g. the HTTP server workload. But it
>> seems forwarding can enjoy all available CPUs. I will work on this.
>>
>> Thanks,
>> sephe
>>
>> --
>> Tomorrow Will Never Die
>
>
> Sephe,
>
> Great work maximizing throughput while keeping the latency well bounded,
> this is a pretty astounding performance profile, many thumbs up.
>
>
> Sam
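[Editor's note: the power-of-2 limitation Sephe describes comes from selecting the target CPU by masking a per-packet hash. The sketch below is not DragonFly's actual dispatch code; NCPUS, cpu_pow2(), and cpu_all() are made-up names for illustration. It only shows why masking with (16 - 1) on a 24-CPU box leaves 8 CPUs idle, while a modulo over the full CPU count spreads packets across all of them.]

/*
 * Minimal illustration (hypothetical, not DragonFly source): with a
 * power-of-2 mask, packet hashes only ever map to CPUs 0..15 on a
 * 24-CPU system; a modulo over all CPUs uses every core.
 */
#include <stdio.h>

#define NCPUS       24                  /* total CPUs in the system */
#define NCPUS_POW2  16                  /* largest power of 2 <= 24 */

/* hypothetical per-packet hash (e.g. an RSS-style flow hash) */
static int cpu_pow2(unsigned hash) { return hash & (NCPUS_POW2 - 1); }
static int cpu_all(unsigned hash)  { return hash % NCPUS; }

int main(void)
{
    int hits_pow2[NCPUS] = { 0 }, hits_all[NCPUS] = { 0 };

    /* count which CPU each of 10000 sample hashes would land on */
    for (unsigned h = 0; h < 10000; h++) {
        hits_pow2[cpu_pow2(h)]++;
        hits_all[cpu_all(h)]++;
    }

    for (int i = 0; i < NCPUS; i++)
        printf("cpu%2d: pow2=%4d  all=%4d\n", i, hits_pow2[i], hits_all[i]);
    return 0;
}

[Running it shows cpu16..cpu23 with zero hits in the pow2 column, which is the gap Sephe's planned change would close for the forwarding path.]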
