Our current approach to packet sizes has several major disadvantages. I will 
explain them in a moment; first, here's what we do:

All messages are held for up to 100ms for coalescing as many messages as 
possible into one packet, and are sent in priority order. We send a packet 
anyway if we have more than MTU-100 bytes of data to send. Base overhead is a 
minimum of 27 bytes, plus one byte per ack/resend request/etc. After that, 
the packet is padded to the next 64-byte boundary, i.e. between 0 and 63 
bytes of random padding are added. Bulk and block transfers are split into 
1024-byte blocks, which are passed to the coalescing queue as often as 
congestion control and bandwidth limiting will allow.
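To make the arithmetic concrete, here is a minimal sketch of the current padding scheme as described above. The function name and header layout are illustrative assumptions, not Freenet's actual code:

```python
import os

BASE_OVERHEAD = 27   # minimum header bytes
ALIGNMENT = 64       # pad to the next 64-byte boundary

def padded_packet(payload: bytes, n_acks: int) -> bytes:
    """Build a packet: header (one extra byte per ack/resend request),
    then payload, padded with random bytes to the next 64-byte boundary."""
    size = BASE_OVERHEAD + n_acks + len(payload)
    pad = (-size) % ALIGNMENT                      # 0..63 random bytes
    return b"\x00" * (BASE_OVERHEAD + n_acks) + payload + os.urandom(pad)

# e.g. a 10-byte message with 3 acks: 27 + 3 + 10 = 40 bytes, padded to 64
```

Note how badly this treats small messages: a 10-byte message still costs a full 64 bytes on the wire.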

Most messages are between 8 and ~14 bytes. CHK found, CHK data insert, and 
some opennet messages, are up to 42 bytes. Messages which include keys range 
from 52 to 107 bytes (SSK keys are 64 bytes, CHK keys are 32). The last phase 
of swapping involves 188-byte messages. Block and bulk transfer messages, SSK 
requests, pubkeys, inserts, and the times-last-few-messages-were-received 
messages (sent on connect, to identify whether we are NATed) are over 1024 
bytes, some of them considerably larger.

By rewriting the transport layer, we can reduce the base overhead to maybe 
10-16 bytes. The hash (32 bytes) will be replaced by a MAC of the same size. 
We could reduce its size for smaller packets, at the cost of some security. 
We could limit the security cost by doing a MAC at a higher layer (after 
reassembling a stream), but this won't work if there isn't a stream to 
reassemble: right now there isn't one; short packets contain a few short 
messages and a few acks. We could implement an in-order stream to put the 
short messages in, and then do a MAC before passing them on, but that would 
cost quite a bit of latency.
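The MAC-size trade-off mentioned above can be sketched as follows. The key and tag lengths here are illustrative assumptions; the text suggests replacing the 32-byte hash with a MAC and possibly truncating it for small packets:

```python
import hmac
import hashlib

def packet_mac(key: bytes, packet: bytes, tag_len: int = 32) -> bytes:
    """HMAC-SHA256 over the packet, optionally truncated.
    Truncating to e.g. 10 bytes saves 22 bytes of overhead per packet,
    but reduces forgery resistance from 2^256 to roughly 2^(8*tag_len)."""
    return hmac.new(key, packet, hashlib.sha256).digest()[:tag_len]

key = b"\x01" * 32
full = packet_mac(key, b"short message")       # 32-byte tag
small = packet_mac(key, b"short message", 10)  # 10-byte tag
```

A truncated tag is standard practice (HMAC explicitly supports truncated output), but for very short tags the forgery-resistance cost is real, which is why the text considers moving the full MAC to a higher layer.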

The original design goals were to minimise request latency, and maximise block 
transfer efficiency.

Problems with current system
============================

Security:

- We remain fairly vulnerable to traffic analysis based on packet size. 
Because a few messages are HUGE, they can probably be traced across the 
network by a global passive traffic analyser. We can be reasonably open 
about this, since They undoubtedly already know it!

- Stego transports will not work well with the current approach. We want a 
transport mimicking RealPlayer streams to have packet sizes equivalent to 
those of RealPlayer streams, for example.

Reliability:

- Those few messages may not fit within the available MTU. This will cause 
*severe* problems on some connections.

Performance:

- Because block transfer data is sent in large chunks, it is usually not 
combined with small messages. Therefore, the small messages are heavily 
padded (as well as suffering the significant crypto overhead on their own). 
Of course, the big packets have a relatively low overhead.

Proposed solution
=================

All big messages are converted into streams. Small messages such as 
FNPAccepted should remain as messages.

The packet size does not depend directly on the details of the pending 
messages. We determine the packet size by reference to a target profile, of 
whatever we are trying to mimic (e.g. some RealPlayer codecs send a lot of 
660-byte packets; some send a lot of 330-byte packets; Skype sends a lot of 
68-byte packets). If we're not using a steganographic transport, there are 
other means to decide on a packet size - a fixed 256 bytes maybe, or a range. 
However, this only applies if we are currently sending streams to the node. 
If we're not, we have essentially the same problem we have now: high packet 
overhead. The only way to solve this at this level would be to bundle 
messages into a stream anyway, but that would cost latency... However, not 
having any streams in progress is IMHO the result of a higher level problem.
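The idea of picking packet sizes from a target profile, independent of the queued messages, can be sketched like this. The profile names and weightings are invented examples based on the figures in the text, not measured distributions:

```python
import random

# (size, weight) pairs: "a lot of" one size, with some variety
PROFILES = {
    "realplayer": [(660, 0.8), (330, 0.2)],  # codec-dependent
    "skype": [(68, 1.0)],
    "plain": [(256, 1.0)],  # no stego transport: a fixed size (or a range)
}

def target_packet_size(profile: str, rng: random.Random) -> int:
    """Pick the next wire size from the transport's target profile,
    regardless of what messages are actually pending."""
    sizes, weights = zip(*PROFILES[profile])
    return rng.choices(sizes, weights=weights, k=1)[0]
```

The point is that the message queue never influences the choice, so packet sizes leak nothing about traffic content.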

With the new transport layer, all packets are subject to both bandwidth 
limiting and congestion control.

Streams (including bulk and block transfers) are no longer sent in 1kB chunks. 
They are sent in byte ranges. A packet containing a few messages will be 
padded up to the target size by adding data from streams (in priority order, 
of course), not by adding random data.
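The packet assembly above can be sketched as follows. The header size and data structures are illustrative assumptions (the text suggests a reduced base overhead of maybe 10-16 bytes); the key point is that padding is real stream data, not random bytes:

```python
HEADER = 16  # assumed reduced base overhead

def fill_packet(target_size, messages, streams):
    """messages: list of bytes (small messages to coalesce).
    streams: list of (priority, bytearray) pairs; lower number means
    higher priority. Consumed stream bytes are removed in place."""
    payload = bytearray()
    budget = target_size - HEADER
    for msg in messages:
        if len(payload) + len(msg) > budget:
            break
        payload += msg
    # fill the remainder with stream byte ranges, in priority order,
    # instead of random padding
    for _prio, data in sorted(streams, key=lambda s: s[0]):
        take = min(budget - len(payload), len(data))
        payload += data[:take]
        del data[:take]
        if len(payload) == budget:
            break
    return bytes(payload)
```

For example, with a 64-byte target, a 4-byte message, and one pending stream, the remaining 44 bytes of the 48-byte budget are filled from the stream rather than wasted on padding.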

Advantages of proposed solution
===============================

Security:

- Greatly reduced vulnerability to traffic analysis. No information can be 
derived from the size of an outgoing packet.

- Compatible with most packet-based stego transports, although for really 
small packets some changes would be needed.

Reliability:

- Compatible with any MTU down to modem sizes (576 bytes).

Performance:

- For any peer which has a transfer in progress (including e.g. SSK 
inserts/requests), we should achieve a very good payload percentage. 
However, if there are no pending streams, we don't achieve good results. In 
that case, if there are no steganographic concerns, we could send packets as 
we do now (big enough to fit the messages to send, plus some padding).