On Mon, 2003-11-17 at 07:18, Martin Stone Davis wrote:
> Edward J. Huff wrote:
> > But, to return to the original subject..., there is an argument for
> > closing connections instead of QR-ing when load is high.  If a node
> > closes connections, it continues to accept requests from the
> > connections it has.  I argue that the node really had too many
> > open connections to begin with, and probably won't want to
> > reopen them very soon anyway.  Also, once the backlog clears,
> > the node can open connections again instead of sitting idle
> > until the QR back-off timers expire.
> 
> I've heard Toad say that establishing connections is very costly, so 
> that's why we leave lots of connections open.

Well, what I want to do is to let the node itself control the
back-off time, as follows:

When a fluctuation in the success rate leaves excess trailers
queued, the node estimates how long they will take to transmit.
Then it informs all of its connected peers that it is going to
ignore every query _that it has not already responded to_.  If
messages do not already carry sequence numbers, this scheme
requires them, because each peer needs to know exactly which of
its queries the statement applies to.  (This solves the
"in transit" problem.)
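
For concreteness, here is a minimal sketch of the estimate I
have in mind (the class name and the ceiling division are mine;
nothing like this exists in Fred):

    // Hypothetical sketch: how long until the queued trailers drain
    // at the configured upstream rate.  Rounds up so the node never
    // announces an optimistic deadline to its peers.
    public final class DrainEstimate {
        public static long drainSeconds(long queuedTrailerBytes,
                                        long outputBytesPerSec) {
            if (outputBytesPerSec <= 0)
                throw new IllegalArgumentException("need a positive rate");
            return (queuedTrailerBytes + outputBytesPerSec - 1)
                    / outputBytesPerSec;
        }
    }

For example, 600,000 bytes of queued trailers at the 5000
bytes/sec limit I use below works out to a 120-second overload
window.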

The message sent to the connected peers is a contract:  the node
promises to ignore a specific set of messages, starting at a
particular sequence number and open-ended for the time being.
Call this the overload message.  It amounts to a reply to all of
the in-transit messages, plus a promise to ignore any and all
subsequent ones.
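
The overload message itself needs to carry almost nothing.  A
hypothetical sketch (these names are invented, not Fred code):

    // Hypothetical wire message: "every query on this connection
    // from sequence number firstIgnoredSeq onward will be ignored,
    // until further notice."
    public final class OverloadMessage {
        public final long firstIgnoredSeq;

        public OverloadMessage(long firstIgnoredSeq) {
            this.firstIgnoredSeq = firstIgnoredSeq;
        }
    }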

When the node estimates that it can safely accept queries again,
it sends a second message to all connected peers; call it the
overload-cleared message.  This message contains a token which
each peer must present in its next request.  Any requests still
in the pipeline will be ignored until one carrying the token
arrives.
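
Putting the two messages together, the overloaded node's
bookkeeping might look like this hypothetical state machine
(again, invented names; a sketch, not an implementation):

    import java.security.SecureRandom;

    // declareOverload() accompanies the overload message,
    // declareCleared() generates the token carried by the
    // overload-cleared message, and acceptQuery() drops every
    // request in between until the token shows up.
    public final class OverloadState {
        private static final SecureRandom RNG = new SecureRandom();
        private boolean overloaded = false;
        private boolean cleared = false;
        private long token;

        public synchronized void declareOverload() {
            overloaded = true;
            cleared = false;
        }

        public synchronized long declareCleared() {
            token = RNG.nextLong();
            cleared = true;
            return token;  // send this in the overload-cleared message
        }

        public synchronized boolean acceptQuery(Long presentedToken) {
            if (!overloaded) return true;
            if (cleared && presentedToken != null
                    && presentedToken == token) {
                overloaded = false;  // pipeline flushed; back to normal
                return true;
            }
            return false;  // still inside the ignored window
        }
    }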

Can anyone explain how this protocol is vulnerable to attack?

> 
> > 
> > Right now, my node has connections only to nodes which do
> > back-off.  It has 148 connections to 82 different nodes
> > (counting some old builds which got blocked at the firewall
> > within 5 minutes of connecting), but apparently all of them
> > are backed-off because it is getting only about 300 queries
> > per hour.  It is completely idle, with no connections 
> > transferring.  If it had closed connections instead of
> > QR-ing, it would be able to re-open the connections now
> > and get more work.
> 
> Maybe this is a result of those nodes backing off too much.  Ken Corson 
> recently suggested that we should be using linear instead of exponential 
> backoff (see "Additional ways to reduce load aside from QR").
> 
That's this thread...  Actually, I quoted <edt> on #freenet
suggesting that and said I agreed; Ken Corson said he agreed
too.
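
To see why the shape of the curve matters, compare the two with
some made-up constants (an illustration only, not Fred's actual
timer code):

    // Hypothetical back-off timers, milliseconds after N
    // consecutive failures, both capped at ten minutes.
    public final class BackoffCurves {
        static final long STEP_MS = 1000L;    // linear step
        static final long BASE_MS = 1000L;    // exponential base
        static final long CAP_MS  = 600000L;  // ten-minute ceiling

        static long linear(int failures) {
            return Math.min(CAP_MS, STEP_MS * failures);
        }

        static long exponential(int failures) {
            if (failures >= 20) return CAP_MS;  // avoid shift overflow
            return Math.min(CAP_MS, BASE_MS << failures);
        }
    }

With these constants, ten straight failures mean a 10-second
linear wait but a capped ten-minute exponential one; that gap is
exactly a node sitting idle long after the congestion has passed.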

I increased the size of my routing table so that even if many
nodes in it are backed off, I still have some left to send to.

Also, because I am blacklisting old nodes, I don't want to
forget about new nodes which I might otherwise be unable to
reach.

Results:  I see fluctuations in my upstream bandwidth and in my
localQueryTraffic, but the success ratio is staying above 50%,
and is often 100% for hours at a time.  I guess those are the
hours when the upstream bandwidth is not being fully used.

Date      Time            Queries Successes Ratio
11/16/03  5:00:00 PM EST   51   51 1.0
11/16/03  6:00:00 PM EST   11   11 1.0
11/16/03  7:00:00 PM EST    6    6 1.0
11/16/03  8:00:00 PM EST  314  158 0.503
11/16/03  9:00:00 PM EST  338  280 0.828
11/16/03 10:00:00 PM EST  478  420 0.878
11/16/03 11:00:00 PM EST  800  577 0.721
11/17/03 12:00:00 AM EST 1405  972 0.691
11/17/03  1:00:00 AM EST 1019  567 0.556
11/17/03  2:00:00 AM EST  281  261 0.928
11/17/03  3:00:00 AM EST  240  182 0.758
11/17/03  4:00:00 AM EST   90   70 0.777
11/17/03  5:00:00 AM EST  747  533 0.713
11/17/03  6:00:00 AM EST  321  313 0.975
11/17/03  7:00:00 AM EST  128  128 1.0
11/17/03  8:00:00 AM EST  747  538 0.720
11/17/03  9:00:00 AM EST   80   80 1.0
11/17/03 10:00:00 AM EST   75   75 1.0
11/17/03 11:00:00 AM EST   47   47 1.0
11/17/03 12:00:00 PM EST   74   72 0.972
11/17/03  1:00:00 PM EST  363  358 0.986
11/17/03  2:00:00 PM EST  347  314 0.905
11/17/03  3:00:00 PM EST   58   58 1.0
11/17/03  4:00:00 PM EST  128   87 0.679
11/17/03  5:00:00 PM EST  317  274 0.864
11/17/03  6:00:00 PM EST  247  190 0.769
11/17/03  7:00:00 PM EST  124   99 0.798
11/17/03  8:00:00 PM EST  192  186 0.969
11/17/03  9:00:00 PM EST  805  610 0.758
11/17/03 10:00:00 PM EST  203  200 0.985
11/17/03 11:00:00 PM EST  190  190 1.0
11/18/03 12:00:00 AM EST   29   29 1.0
11/18/03  1:00:00 AM EST  626  538 0.859
11/18/03  2:00:00 AM EST 3631 1462 0.403
11/18/03  3:00:00 AM EST 2195 1291 0.588
11/18/03  4:00:00 AM EST 1767 1110 0.628
11/18/03  5:00:00 AM EST 1229  874 0.711
11/18/03  6:00:00 AM EST  408  344 0.843
11/18/03  7:00:00 AM EST  394  354 0.898

Actual bandwidth usage, calculated from /proc/net/dev eth0
counters (the two columns are bytes/sec for the two directions
of eth0 traffic, averaged over each hour); a sketch of the
calculation follows the table.  Bandwidth usage stays below the
specified level.  The high-level bandwidth limit is set at
5000 bytes/sec, a setting which used to give about 6 or
7 kbytes/sec of actual traffic.

Sun Nov 16 17 00 01    565.06         783.484
Sun Nov 16 18 00 06   2190.72        1021.04
Sun Nov 16 19 00 11   1015.17        1459.3
Sun Nov 16 20 00 15   1394.62        3411.37
Sun Nov 16 21 00 18   2648.47        2941.59
Sun Nov 16 22 00 22   1851.6         2068.36
Sun Nov 16 23 00 25   2577.88        3279.17
Mon Nov 17 00 00 29   2396.43        3770.62
Mon Nov 17 01 00 33   4501.31        4905.91
Mon Nov 17 02 00 37   2824.98        3481.43
Mon Nov 17 03 00 41    745.686       1003.12
Mon Nov 17 04 00 45   1613.12        2060.82
Mon Nov 17 05 00 52   1876.16        2841.57
Mon Nov 17 06 00 56   1249.46        1638.78
Mon Nov 17 07 00 00   3341.46        1633.44
Mon Nov 17 08 00 05   4606.12        3772.24
Mon Nov 17 09 00 08   2454.26        1291.09
Mon Nov 17 10 00 12   1819.83        1310.21
Mon Nov 17 11 00 16   1258.87        1282.66
Mon Nov 17 12 00 20   1514.59        1387.27
Mon Nov 17 13 00 24   1193.66        1129.55
Mon Nov 17 14 00 28   3412.49        1645.13
Mon Nov 17 15 00 32   3044.33        1779.14
Mon Nov 17 16 00 36   3164.1         2386.25
Mon Nov 17 17 00 40    972.998       1750.07
Mon Nov 17 18 00 44   1081.24        2468.52
Mon Nov 17 19 00 48    970.305       1654.31
Mon Nov 17 20 00 51   1080.41        1383.1
Mon Nov 17 21 00 55   1886.64        2261.88
Mon Nov 17 22 00 59   1120.69        1307.26
Mon Nov 17 23 00 03    899.392       1241.07
Tue Nov 18 00 00 07   2885.65        1458.31
Tue Nov 18 01 00 11   3770.29        3319.74
Tue Nov 18 02 00 15   4864.45        5758.58
Tue Nov 18 03 00 19   3513.41        4890.75
Tue Nov 18 04 00 23   3106.09        4167.83
Tue Nov 18 05 00 28   2301.67        3417.71
Tue Nov 18 06 00 32   1674.65        1944.18
Tue Nov 18 07 00 35   1519.93        2244.52
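
The sketch promised above: read the cumulative eth0 byte
counters from /proc/net/dev, sample them an hour apart, and
divide the difference by the elapsed seconds.  (Field positions
follow the standard Linux layout; the hourly sampling loop is
assumed and not shown.)

    import java.io.BufferedReader;
    import java.io.FileReader;
    import java.io.IOException;

    // Minimal sketch: read the cumulative rx/tx byte counters for
    // one interface from /proc/net/dev.  Two samples taken an hour
    // apart, divided by the elapsed seconds, give the per-direction
    // bytes/sec figures in the table above.
    public final class NetDevSample {
        public final long rxBytes, txBytes;

        private NetDevSample(long rx, long tx) {
            rxBytes = rx;
            txBytes = tx;
        }

        public static NetDevSample read(String iface) throws IOException {
            BufferedReader r =
                new BufferedReader(new FileReader("/proc/net/dev"));
            try {
                String line;
                while ((line = r.readLine()) != null) {
                    line = line.trim();
                    if (!line.startsWith(iface + ":")) continue;
                    String[] f = line.substring(line.indexOf(':') + 1)
                                     .trim().split("\\s+");
                    // Fields 0 and 8 are rx bytes and tx bytes.
                    return new NetDevSample(Long.parseLong(f[0]),
                                            Long.parseLong(f[8]));
                }
                throw new IOException(iface + " not in /proc/net/dev");
            } finally {
                r.close();
            }
        }
    }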

-- Ed Huff
