Matthew Toseland wrote:
> On Tuesday 06 January 2009 12:15, Florent Daigniere wrote:
>> Matthew Toseland wrote:
>>> On Wednesday 31 December 2008 14:23, Matthew Toseland wrote:
>>>> #1: 41 votes : release the 20 nodes barrier
>>>>
>>>> "most of the users nowadays have a lot of upload-bandwith available. 
> Myself 
>>>> has about 3Mbits upload, but the limit to connect to not more than 20 
> nodes 
>>>> results in about 50kb/s max. Please release the limit or use a dynamic 
>>> system 
>>>> that offers more connections if the node has a high bandwith upload limit 
>>>> (scaling). Thx"
>>>>
>>>> I'm not sure what to do about this. The original rationale for the 20 
>>>> peers limit was that we didn't want to disadvantage darknet nodes too 
>>>> much on a hybrid network, since they will not often have large numbers 
>>>> of peers. That was combined with experience on 0.5 suggesting that more 
>>>> peers is not always better, a security concern about over-reliance on 
>>>> ubernodes, and the fact that we should eventually be able to improve 
>>>> bandwidth usage through better load management. However, there's a limit 
>>>> to what better load management can achieve, and it's a difficult problem.
>>>>
>>>> Thoughts?
>>> As people have pointed out, many people only have access to very slow 
>>> connections. Vive seems to think there is no theoretical problem with 
>>> this ... so the remaining questions:
>>> - What should the minimum number of peers be?
>>> - What should the maximum number of peers be?
>>> - How much output bandwidth should we require for every additional peer?
>>>
>>> For the first, a safe answer would be 20, since that's what we use now; 
>>> clearly it won't seriously break things. IMHO less than 1kB/sec/peer is 
>>> unreasonable, but I might be persuaded to use more than that. And we 
>>> probably should avoid adding more peers until we've reached the minimum 
>>> bandwidth for the lower limit. Vive suggested a limit of 50, I originally 
>>> suggested 40 ... probe requests continue to show approximately 1000 live 
>>> nodes at any given time, so we don't want the upper limit to be too high; 
>>> 100 would certainly be too high.
>>>
>>> One possibility then:
>>>
>>> 0-20kB/sec : 20 peers
>>> 21kB/sec : 21 peers
>>> ...
>>> 40kB/sec+ : 40 peers
>>>
>>> Arguably this is too fast; some connections have a lot more than 40kB/sec 
>>> spare upload bandwidth. Maybe it shouldn't even be linear? Or maybe we 
>>> should have a lower minimum number of peers?
>>>
>>> 0-10kB/sec : 10 peers
>>> 12kB/sec : 11 peers
>>> 14kB/sec : 12 peers
>>> ...
>>> 70kB/sec : 40 peers
>>>
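
For concreteness, the first mapping quoted above amounts to something like 
the following (an illustrative sketch only, not existing Freenet code; the 
class, method and constant names are made up, and it assumes the node's 
configured output bandwidth limit is available in bytes/sec):

    // Illustrative sketch only, not existing Freenet code.
    class PeerScalingSketch {
        static final int MIN_PEERS = 20;        // current hard limit, kept as the floor
        static final int MAX_PEERS = 40;        // proposed ceiling
        static final int BYTES_PER_PEER = 1024; // at least 1kB/sec of output per peer

        // One peer per 1kB/sec of configured output bandwidth,
        // never fewer than 20 and never more than 40.
        static int maxOpennetPeersFor(int outputBytesPerSec) {
            int peers = outputBytesPerSec / BYTES_PER_PEER;
            return Math.max(MIN_PEERS, Math.min(MAX_PEERS, peers));
        }
    }

So e.g. 15kB/sec gives 20 peers, 21kB/sec gives 21, and anything from 
40kB/sec up gives 40. The second table would just use a lower floor (10 
peers) and a shallower slope (roughly one extra peer per 2kB/sec above 
10kB/sec).
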
>> Yay, more alchemy!
> 
> More alchemical than an arbitrary 20 peers limit? I suppose there are more 
> parameters...

See below; my objection isn't to changing the alchemy, it's to changing it now.

>> What's the reason why we are considering raising the limit again? 
> 
> To improve performance on opennet, in the average case, for slow nodes, and 
> for fast nodes?
> 
>> It's not the top priority on the uservoice thingy anymore. Anyway, I 
>> remain convinced that ~50 votes is irrelevant (especially when we consider 
>> that a single user can give 3 votes to the same task!) and that we 
>> shouldn't set priorities depending on what some "vocal" users are saying.
>>
>> They are concerned that their bandwidth isn't being used up? Fine! Turn 
>> them into seednodes, create a distribution toadlet, create a special 
>> mode where they would only serve UoMs (and would be registered by 
>> seednodes as such)... There are plenty of solutions to max out their 
>> upload bandwidth usage if that's what they want their node to do!
> 
> Don't you think that more opennet peers for fast nodes, and maybe fewer for 
> really slow nodes, would improve performance for everyone?

Fewer peers for slow nodes would help in terms of latency; I'm not sure 
about more for fast nodes.

> Given that our 
> current load management limits a node's performance by the number of its 
> peers multiplied by the average bandwidth per peer on the network?
> 
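To put rough numbers on that model, using only the figures quoted above: the 
uservoice poster sees roughly 50kB/sec across 20 peers, i.e. about 2.5kB/sec 
per peer on average, so under current load management even a 3Mbit uplink 
tops out around 20 x 2.5 = 50kB/sec, and doubling the peer cap to 40 would 
only lift that ceiling to roughly 100kB/sec. (Back-of-the-envelope only, 
taking the quoted complaint at face value.)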

IMHO it's a lot trickier to do than bumping one constant! Anyway, 
that's not the point: the point is that you're about to merge a new 
client layer AND are considering changing yetAnotherParameter which 
might have network-wide effects in the meantime. All of that in a short 
timeframe and right before a release (unless I missed something, the 
release is still planned for "soon")!

It's bad practice.

Suppose things get screwed up (or drastically improve): how will you tell 
which of your changes caused it? It's not as if the theoreticians know 
for sure what the effects will be: they said "it shouldn't 
break things" ... not that it won't or can't.
