Hi David,

On Jun 12, 2015, at 03:44 , David Lang <da...@lang.hm> wrote:

> On Thu, 11 Jun 2015, Sebastian Moeller wrote:
> 
>> 
>> On Jun 11, 2015, at 03:05 , Alan Jenkins 
>> <alan.christopher.jenk...@gmail.com> wrote:
>> 
>>> On 10/06/15 21:54, Sebastian Moeller wrote:
>>> 
>>> One solution would be if ISPs made sure upload is 100% provisioned. Could 
>>> be cheaper than for (the higher rate) download.
>> 
>>      Not going to happen, in my opinion, as it is economically unfeasible for a 
>> publicly traded ISP. I would settle for that approach as long as the ISP is 
>> willing to fix its provisioning so that oversubscription episodes are 
>> reasonably rare, though.
> 
> not going to happen on any network, publicly traded or not.
> 
> The question is not "can the theoretical max of all downstream devices exceed 
> the upstream bandwidth" because that answer is going to be "yes" for every 
> network built, LAN or WAN, but rather "does the demand in practice of the 
> combined downstream devices exceed the upstream bandwidth for long enough to 
> be a problem".

        This is what I meant to convey with “oversubscription episodes are 
reasonably rare”: oversubscription here meaning effective or realized, as 
compared to potential. I realize this is not what “oversubscribed” usually 
means ;). I was aiming at the fact that ISPs need to balance their static 
oversubscription in a way that congestion periods on the “nodes” (whatever a 
node is) are rare enough that customers do not jump ship.

> 
> it's not even a matter of by what percentage they are oversubscribed.
> 
> someone with 100 1.5Mb DSL lines downstream and a 50Mb upstream (30% of 
> theoretical requirements) is probably a lot worse than someone with 100 1G 
> lines downstream and a 10G upstream (10% of theoretical requirements) because 
> it's far less likely that the users of the 1G lines are actually going to 
> saturate them (let alone simultaneously for a noticeable timeframe), while 
> it's very likely that the users of the 1.5M DSL lines are going to saturate 
> their lines for extended timeframes.
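
        To spell out that arithmetic (a toy sketch in Python; the per-line 
sustained-draw figures are my own illustrative assumptions, not measurements):

    # Uplink capacity as a fraction of theoretical downstream demand
    def provisioned_fraction(lines, line_mbps, uplink_mbps):
        return uplink_mbps / (lines * line_mbps)

    print(provisioned_fraction(100, 1.5, 50))      # ~0.33 (the DSL case above)
    print(provisioned_fraction(100, 1000, 10000))  # 0.10 (the 1G case above)

    # Assumed sustained draw per line (pure guesses, for illustration only):
    print(100 * 1.2 / 50)      # DSL at 1.2 Mb/s each -> uplink 2.4x overloaded
    print(100 * 30 / 10000)    # 1G at 30 Mb/s each   -> uplink 30% utilized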

        I assume that ISPs take such factors into account when designing their 
access networks initially; but mainly I hope that they actually track realized 
congestion periods and upgrade bottleneck equipment to keep those periods 
rare.
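
        To make “track and upgrade” concrete, the sort of trigger I have in 
mind might look like this (thresholds and samples entirely made up):

    # Flag a link for upgrade when its 95th-percentile utilization
    # exceeds 80% of capacity (both thresholds are arbitrary assumptions).
    def needs_upgrade(samples_mbps, capacity_mbps, pct=0.95, threshold=0.80):
        ordered = sorted(samples_mbps)
        p95 = ordered[int(pct * (len(ordered) - 1))]
        return p95 > threshold * capacity_mbps

    # Toy five-minute averages on a 10 Gb/s link:
    print(needs_upgrade([4200, 6100, 8900, 9300, 7500, 8800], 10_000))  # True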

> 
> The problem shows up when either usage changes rapidly, or the network 
> operator is not keeping up with required upgrades as gradual usage changes 
> happen (including when they are prevented from upgrading because a peer won't 
> cooperate)

        Good point, I was focusing too narrowly on the access link, but peering 
is another “hot potato”. End users often try to use traceroute and friends, or 
VPNs to uncongested peers, to distinguish access-network congestion from 
“under-peering”, even though at the end of the day the effects are similar. 
Come to think of it, I believe that under-peering shows up mostly as a 
bandwidth loss, as compared to the combined bandwidth loss and latency increase 
often seen on the access side (but this is conjecture, as I have never seen 
traffic data from a congested peering connection).
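
        The kind of comparison I mean could be scripted roughly like this (the 
host names are placeholders, and this is a rough sketch, not a proper 
measurement tool):

    # Compare RTT and loss to a host inside the ISP vs. one across a peering
    # point; a congested peer should mostly show up at the second target.
    import subprocess

    targets = {
        "inside ISP":     "host-in-isp.example",       # placeholder name
        "across peering": "host-beyond-peer.example",  # placeholder name
    }

    for label, host in targets.items():
        out = subprocess.run(["ping", "-c", "10", host],
                             capture_output=True, text=True).stdout
        print(label, "->", host)
        # the last two lines of ping's output hold loss and rtt statistics
        print("\n".join(out.strip().splitlines()[-2:]))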

> 
> As for the "100% provisioning" ideal, think through the theoretical aggregate 
> and realize that before you get past very many layers, you get to a bandwidth 
> requirement that it's not technically possible to provide.

        Well, I still believe that an ISP is responsible for keeping its part 
of the contract by at least carrying a considerable percentage of the sold 
access bandwidth into its own core network. But 100% is not going to be that 
percentage, I agree, and I am happy to accept congestion as long as it is 
transient (and by that I do not mean that it gets bad every evening and merely 
clears up overnight, but rather that the ISP increases bandwidth to keep 
congestion periods rare)…
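
        To put David’s aggregate argument in numbers (all figures below are 
assumptions for illustration, not data about any real network):

    # How "100% provisioning" compounds across aggregation layers
    access_gbps   = 1.0   # sold rate per subscriber
    subs_per_node = 100   # subscribers per access node
    nodes_per_agg = 50    # access nodes per aggregation router
    aggs_per_core = 20    # aggregation routers per core link

    node_need = access_gbps * subs_per_node  # 100 Gb/s per access node
    agg_need  = node_need * nodes_per_agg    # 5,000 Gb/s per aggregation router
    core_need = agg_need * aggs_per_core     # 100,000 Gb/s per core link
    print(node_need, agg_need, core_need)    # quickly beyond any feasible link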

Best Regards
        Sebastian

> 
> David Lang
