Joe Greco wrote on 29/03/2020 23:14:
Flood often works fine until you attempt to scale it. Then it breaks,
just like Bjørn admitted. Flooding is inherently problematic at scale.
For... what, exactly? General Usenet?
Yes, this is what we're talking about: it couldn't scale to general
Usenet levels.
The scale issue wasn't flooding, it was bandwidth and storage.
The bandwidth and storage problems happened because of flooding. Short
of cutting off content, there's no way to restrict bandwidth usage, but
cutting off content restricts the functionality of the ecosystem. You
can work around this using remote readers and manually distributed
feeds, but there's still a fundamental scaling issue here, namely that
the model of flooding all posts in all groups to all nodes has terrible
scaling characteristics: it requires every core node to scale its
individual resource requirements linearly with the overall load of the
entire system. You can manually configure load splitting to work
around some of these limitations, but it's not possible to ignore the
design problems here.
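
To make that concrete, here's a rough back-of-the-envelope sketch in
Python. All of the figures and the 5% split are illustrative
assumptions, not measurements of any real feed: with a full flooded
feed, a node's inbound transfer and spool storage track the total
volume posted across the whole network, and the only lever for
reducing them is to carry less of the feed.

# Back-of-the-envelope: per-node cost of carrying a full flooded feed
# versus a cut-down feed. All numbers are illustrative assumptions.

def per_node_daily_cost(total_gb_per_day, fraction_carried=1.0,
                        retention_days=30):
    # Inbound transfer scales with however much of the feed you accept;
    # spool storage is that figure times the retention period.
    inbound_gb = total_gb_per_day * fraction_carried
    spool_gb = inbound_gb * retention_days
    return inbound_gb, spool_gb

# Hypothetical totals posted across all groups, growing 100x overall.
for total in (100, 1_000, 10_000):
    full_in, full_spool = per_node_daily_cost(total)
    part_in, _ = per_node_daily_cost(total, fraction_carried=0.05)
    print(f"system {total:>6} GB/day: full feed {full_in:>6.0f} GB/day in,"
          f" {full_spool:>7.0f} GB spool; 5% feed {part_in:>4.0f} GB/day in")

The full-feed column tracks total system volume one-for-one, which is
the point: every core node has to keep growing its own capacity as the
whole network grows, and the only knob available is carrying less
content.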
[...]
The Usenet "backbone" with binaries isn't going to be viable without a
real large capex investment and significant ongoing opex. This isn't a
failure in the technology.
We may need to agree to disagree on this, then. Reasonable engineering
entails being able to build workable solutions within a feasible
budget. If you can't do that, then there's a problem with the
technology at the design level.
Usenet is a great technology for doing collaboration on low bandwidth and
lossy connections.
For small, constrained quantities of traffic, it works fine.
Nick