----- Original Message -----
From: "Tom Kaitchuck" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Sunday, August 03, 2003 11:47 PM
Subject: Re: [Tech] freenet not suited for sharing large data
> On Sunday 03 August 2003 03:16 pm, Gabriel K wrote:
> > I'm sorry if it sounded like I thought there was a distinction. In Freenet
> > there is no such distinction, true.
> > However, if you DO have such a distinction, one mechanism to search and
> > one to request, then it is much easier to set the number of proxies.
>
> That is all well and good if you only want to conceal the sender's and the
> receiver's identity, but what about the host's? One of Freenet's main goals
> is to prevent the NETWORK from being attacked.

I'm not sure I follow you here... what about the hosts? Surely you are not saying that any protocol with one mechanism to search and one to request leaves the hosts exposed? I see no such connection. And in Freenet, as far as I understand, if you request a document you cannot see who is hosting it anyway... so what is your question?

> > Hmm... when you say "the network" and "your computer"... the difference is
> > only that the network consists of a whole bunch of "your computers", so I
> > don't see it as a big plus when you say that people don't have to connect
> > to YOUR computer to get the file. It doesn't really matter which computers
> > they connect to. It is good that the load of downloading a file is
> > distributed as much as possible to avoid bottlenecks, as Freenet and
> > BitTorrent do.
>
> The advantage is that they only connect to those they already connect to.
> Also you don't have to have a two-phase download. AND "your computers" are
> not limited to people who have downloaded the file.

Well, it is good that a node only needs to "talk" to nodes it is already connected to for the sake of anonymity, BUT this can also be very bad for the sake of bandwidth. I suggest that a node has a few neighbours, and that any data request is always received through the same proxy. So a node that tries to explore which nodes are in the net will always "see" the same thing. And it should not help if he rejoins the net several times.
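The "always the same proxy" idea could be sketched like this. This is only my own illustration of the suggestion, not an existing protocol; all names and the hash-based choice are hypothetical. The point is that the choice of entry proxy is a pure function of the requester's identity, so rejoining and re-requesting reveals nothing new about the network.

```python
# Toy sketch (hypothetical, not a real protocol): a node maps each
# requester deterministically to one of its neighbours, so that the
# same requester always sees the same entry proxy.
import hashlib

def entry_proxy(neighbours, requester_id):
    """Deterministically pick one neighbour as the entry proxy
    for a given requester. Same input -> same proxy, always."""
    digest = hashlib.sha256(requester_id.encode()).digest()
    index = int.from_bytes(digest[:8], "big") % len(neighbours)
    return neighbours[index]

peers = ["nodeA", "nodeB", "nodeC"]
# Rejoining and asking again yields the same view of the net:
assert entry_proxy(peers, "curious-node") == entry_proxy(peers, "curious-node")
```

A real design would also have to handle neighbours leaving, but the stability property above is the essence of the suggestion.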
> > You mean because the file is split up into chunks and distributed in the
> > network? Yeah, that's true, and indeed very good.
> > However, it requires the share to be uploaded... and as I said before,
> > uploading 60 GB of data is not so much fun :)
>
> Well, you have to upload it one way or the other. IE: on request or up front.

As I said before: 1) You only need to upload what is explicitly requested. 2) If you must upload to the network first, the number of proxies, and thus the bandwidth usage, is double compared to sending to the requester at once through a chain of proxies.

> > About Frost... it seems pretty slow... first proxy the data into the
> > network, then let the requester know that it's available now (as soon as a
> > chunk is uploaded), then proxy the data back to the requester... lots of
> > proxies there, right?
>
> First, yes, Frost is slow. Inserts make it slower. Inserts are given the
> lowest priority of anything a node does. This is because presumably nobody
> is sitting and waiting for an insert to finish, when surely they are waiting
> on a request.

So I guess this inserting-when-requested is not very good then?

> Please distinguish between proxies and intermediate nodes. For inserts and
> requests Freenet uses a mixnet approach for the first hop. It sends a
> message to one node to contact another node and deliver a message to it.
> That message tells it to request the data and forward it back. This is
> totally optional. (And disabling this will speed things up.) This is enough
> to protect both the sender's and receiver's identity, but it does not
> protect the host.

1) Hmm, I don't understand the difference between a proxy and an intermediate node... to me, every node in the network acts anonymizing, and thus is a proxy. 2) By host in this case, I assume you mean the node responsible for storing a specific chunk of data? Please explain why and how you think it is not protected. And in which case? In Freenet or the "ideal" protocol I wish existed?
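The "double bandwidth" claim in point 2 can be made concrete with a toy model. This is my own accounting sketch, not Freenet's actual behaviour: it just counts how many links a file of a given size crosses in each scheme, assuming the same number of relays in each direction.

```python
# Toy accounting (my assumption, not measured Freenet behaviour):
# count how many link-crossings a file of `size` makes.

def direct_chain_traffic(size, proxies):
    """Host -> proxies -> requester in one pass:
    the file crosses proxies + 1 links."""
    return size * (proxies + 1)

def insert_then_request_traffic(size, proxies):
    """Two phases: insert into the network through `proxies` relays,
    then the requester retrieves it back through `proxies` relays."""
    insert_phase = size * (proxies + 1)
    retrieve_phase = size * (proxies + 1)
    return insert_phase + retrieve_phase

# 60 GB share, 2 relays per phase:
assert direct_chain_traffic(60, 2) == 180          # GB moved across links
assert insert_then_request_traffic(60, 2) == 360   # exactly double
```

Under these assumptions the two-phase scheme moves every byte across the network twice, which is the factor of two the argument above refers to.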
> Having the intermediate nodes does not really slow your overall bandwidth.
> Yes, there is a set delay for the data to pass through all of them, but you
> can simply make more requests while you wait. Even an ISDN line can have 200
> outgoing connections. If you can't receive the data fast enough to get it
> from all of them once it starts coming in, it will be waiting one hop away.

Ok, now I need to explain what I really mean by lessened bandwidth for the network. Let's say you have a network of nodes that all have 10 Mbit/s connections (full duplex). All nodes want to download as many files as they can at any given time. If they could do this without acting as a proxy for another transfer, they could download at 10 Mbit/s. If the network uses one proxy, they can download at 5 Mbit/s, with two it would be 3.33 Mbit/s, and so on. So, the more proxies, the more overhead bandwidth is used. If only a small part of the nodes download at a given time, then this overhead will not be noticeable, because the nodes downloading might not have to proxy at all. I want this "upper limit" to be as high as possible, because I think one should assume high activity in the network! Thus only one or two proxies should be used for the actual file transfer!

> > So will it be possible to have only ONE proxy, if the downloader so
> > desires? Let's say he thinks it's safe enough for him?
>
> You can have one or even no proxies. However, you can never set, or even
> know, how many intermediate nodes there are. If you want to reduce that
> number as a whole, you can reduce the probability that YOUR node will act as
> one. However, whenever it chooses not to, it learns nothing about routing.

It's good that you can have zero proxies in between and still not compromise the degree of anonymity. BUT in most cases I think there will be too many proxies. I think a good protocol should always have FEW intermediate nodes. And my POINT was that Freenet has too many proxies on average for data transfers!
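The proxy-overhead arithmetic above (10 Mbit/s with no proxies, 5 with one, 3.33 with two) follows from one assumption: when every node is downloading at full tilt, each proxy hop competes for the same per-node capacity, so the effective rate is the node bandwidth divided by the number of transfers each node carries. A minimal sketch of that model:

```python
# Toy model of the saturated-network argument above: every node has the
# same bandwidth B, every node downloads, and every download is also
# relayed by `proxies` other equally busy nodes. Each node then carries
# proxies + 1 transfers, so each transfer gets B / (proxies + 1).

def effective_rate(node_bandwidth_mbit, proxies):
    return node_bandwidth_mbit / (proxies + 1)

for p in range(4):
    print(p, "proxies ->", round(effective_rate(10.0, p), 2), "Mbit/s")
```

With 10 Mbit/s nodes this reproduces the 10 / 5 / 3.33 sequence, and shows why keeping the proxy chain at one or two hops matters when the network is busy.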
The average should be 1 or 2 at any given time!

> If your goal is to reduce latency and speed up large file transfers, I
> proposed a system that would allow Freenet to make use of nodes that have
> already downloaded the data. Under this system, if you wanted, you could
> insert just the manifests for all the data you wanted to insert. Then
> request your data. That way you wouldn't have to upload anything until
> someone else wanted it. It would work with and take advantage of the
> existing Freenet architecture.

This is good, but as I said in another mail, it is still "insert through proxies, retrieve through proxies"; all in all, too many proxies.

> You can read my proposal in the archives (last month under ".torrents and
> Freenet"). However, nobody has expressed any interest in it. I have thought
> of many improvements on my original proposal (including how to greatly
> improve the level of anonymity), so if anyone reading this is interested,
> e-mail me, and I could write some pseudo code describing exactly what needs
> to be done.

I'm not interested in code myself, only the theory. Do you have a paper describing it?

> > Btw, I read in some paper, I think it was ACHORD or CHORD, that a thing
> > about Freenet is that you aren't guaranteed to find the data you are
> > looking for, even if it exists? Because you have to have a TTL on the
> > request to avoid infinite looping problems? True or false? (I read about
> > Freenet some time ago...)
>
> Correct, there are no guarantees. Freenet borrows many ideas from CHORD.

But I'm almost certain that CHORD DOES provide that guarantee? CHORD uses no HTL or TTL; it is deterministic, isn't it? If the data is in the network, it will be found, right? The fact that Freenet does not guarantee this is bad IMO. The design should be such that there are no looping problems.

/Gabriel

_______________________________________________
Tech mailing list
[EMAIL PROTECTED]
http://hawk.freenetproject.org:8080/cgi-bin/mailman/listinfo/tech
