>
> Bandwidth
>
> The more servers, the less likely a single DDoS attack is to derail
> the network. That aside, several measures could be incorporated
> to reduce the risk of DoS - limit the size and number of the initial
> packets sent for a "handshake" for the server. Provide a method of
> verifying the IP address the server is sending to has a freenet
> client on it, and was the initiator of the request. Also, not having
> a way of discovering all the servers on the freenet network would
There is only one handshake exchanged, and it's fairly small. The risk
here is also mitigated by the connection limits discussed below.
> Anonymity
>
> This, IMO, is the biggest architectural issue facing Freenet. We
> all know once Freenet becomes stable and operational, people are
> going to want to monitor what is going on - specifically who is
> transferring what. The RIAA, for example, or the NSA. I don't know
> enough about Freenet to comment on how this will be accomplished,
> but it is something that *will* be attacked.
Other than adding Onion routing, this is pretty well under
control. Obviously this can't be mathematically proven to be under
control, but I'm not aware of any significant flaws.
> Server Utilization
>
> By simply overloading a server with requests, you could grind it
> to a halt. For example, if there was a bug in the protocol that
> allowed you to request that a file be wrapped in several different
> encryption schemes and then sent an attacker could request that a
> large file be encrypted with multiple keys - exhausting available
> memory.
This doesn't really matter, since the server never does any decryption
on a requester's behalf. The only data transformation it performs is the
link-to-link encryption/decryption, which is done only once.
Overloading a server isn't really a problem either. There is a connection
limit after which the server simply stops responding. It's currently set
to 50, which is large enough that a normally operating server never runs
into it, but small enough that the server stops responding fairly quickly
when it's being attacked in this manner.
Remember that unlike a centralized server such as a website, we actually
*want* a node to become unavailable quickly, because the rest of the
network is still there, functioning normally. We effectively discourage
flood attacks because it really doesn't matter if you flood a single
node; you'd have to flood hundreds.
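A rough sketch of that limit in Java. The constant of 50 is the current
setting mentioned above; the port number, class names, and bare accept
loop are invented for illustration and are not the node's actual code:

    import java.net.ServerSocket;
    import java.net.Socket;
    import java.util.concurrent.atomic.AtomicInteger;

    public class LimitedListener {
        static final int MAX_CONNECTIONS = 50;   // the current limit
        static final AtomicInteger open = new AtomicInteger();

        public static void main(String[] args) throws Exception {
            ServerSocket listener = new ServerSocket(8481); // port arbitrary here
            while (true) {
                Socket s = listener.accept();
                if (open.incrementAndGet() > MAX_CONNECTIONS) {
                    open.decrementAndGet();
                    s.close();  // over the limit: a flooder just sees a dead node
                    continue;
                }
                new Thread(() -> {
                    try {
                        handle(s);
                    } finally {
                        open.decrementAndGet();
                    }
                }).start();
            }
        }

        static void handle(Socket s) {
            // service the request, then close s
        }
    }

Dropping excess connections rather than queueing them is exactly the
behavior described above: the node goes quiet quickly under attack while
the rest of the network carries on.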
>
> Think of Freenet as a distributed database - you have a primary
> key which is unique to that piece of data (as any RDBMS will have
> these days). Simply make the key data-dependent (md5sum) and you
> eliminate duplication. A side-effect of this is you then need to
> block the data to discrete sizes (1k, 2k, 4k, whatever). From
> a performance standpoint, however, I can't complain - it would
> make seeking through the database (or cache, or file, whatever)
> less resource-intensive.. since you know the offset ahead of time,
> instead of having to seek to it.
The key *is* data-dependent for CHKs; please read the key bestiary (off
the website). SVKs are public-key dependent and rely on a signature, and
KSKs are a form of SVK. When we refer to file splitting, we mean having
several CHKs that are recombined to form a single document. Each part
(CHK) is protected from tampering by virtue of being a CHK.
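To illustrate why a CHK is tamper-evident: the key is recomputable from
the data itself, so a fetching node can check each part independently.
In this sketch SHA-1 stands in for the node's actual digest, and the
real CHK computation involves more than a bare hash; the names are mine:

    import java.security.MessageDigest;
    import java.util.Arrays;

    public class ChkSketch {
        // The key is derived from the data itself, so a fetching node
        // can recompute it and detect any tampering with a part.
        static byte[] contentHash(byte[] part) throws Exception {
            return MessageDigest.getInstance("SHA-1").digest(part);
        }

        // File splitting: one fixed-size part (and hence one CHK) each,
        // recombined in order to rebuild the original document.
        static byte[][] split(byte[] doc, int partSize) {
            int n = (doc.length + partSize - 1) / partSize;
            byte[][] parts = new byte[n][];
            for (int i = 0; i < n; i++) {
                int from = i * partSize;
                parts[i] = Arrays.copyOfRange(doc, from,
                        Math.min(from + partSize, doc.length));
            }
            return parts;
        }

        static boolean verify(byte[] part, byte[] expectedKey) throws Exception {
            return MessageDigest.isEqual(expectedKey, contentHash(part));
        }

        public static void main(String[] args) throws Exception {
            byte[][] parts = split(new byte[10000], 4096);
            for (byte[] p : parts)
                System.out.println(verify(p, contentHash(p))); // true each time
        }
    }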
Scott