On Sat, Jun 24, 2006 at 10:34:41AM -0400, Colin Davis wrote:
>
> 1) Users tend to prefer Speed to Anonymity-
Then they can use bittorrent. For the most popular files, bittorrent
will always be faster than Freenet. For medium-popularity files, it is
possible that they will be findable, and downloadable, considerably
faster than on bittorrent, because Freenet is a distributed datastore.

> a) Look at the Success of networks like Bittorrent- All the peers
> downloading a file are completely exposed, but people enjoy using it
> because they can get a file quickly.

Until the RIAA busts the tracker.

> b) While the focus of Freenet is different, we can still let USERS
> make that tradeoff.

We do! We can't stop people from connecting to ubernodes. And whatever
measures we impose for the user's and the network's protection can be
overridden by the user, as he has the source code. However, it is
entirely legitimate for us to advise users against using ubernodes, and
even to design the load balancing system in such a way as to not accept
more requests than we can actually *route*, as opposed to dumbly
forwarding them to our one and only ubernode.

>    I) There are a lot of tweaks that could be made, to make
>    things faster.
>       * Increasing the check for new editions
>       exponentially, for instance
>       * Or fully utilizing ubernodes
>    II) As it is, there are people, such as SinnerG, Apophis,
>    and myself, who are BEGGING to make freenet faster!
>
> 2) Freenet is about giving the users control.
> a) The project should give users control whenever possible, assuming
> it doesn't remove significant security from others
>    I) If a user wants to route their data through a fast
>    server, shouldn't we give them that option?

We do. There is nothing stopping you from connecting to an ubernode as
of now.

> b) Trust levels, as mentioned by Toad on the Devl mailing list, are a
> good start, but there is more that can be done with trust.
>    I) Lets say I trust my friend quite a bit, and set him to a
>    high trust level.. Why not fully utilize his connection to me,
>    if it's otherwise empty?

What do you mean by "fully utilize"? The amount of traffic going
through the link is limited by several factors:
- The capacity of the link (when you factor in all the other users of
  that path).
- The capacity of downstream nodes.
- The number of requests which are answered locally by the node.
- The current routing situation.

>    II) If I've set him to a high trust level, I'm presumably OK
>    routing more requests through his node.
>       * As it is, requests are more or less random among
>       non-backed off peers.

This is absolutely not true. Requests are forwarded to the node closest
to the target (which isn't backed off), period. We have seen what
happens when performance is the primary criterion for routing in 0.5.
It sucks. It is vital that the network has real routing. That is the
only way it can scale beyond the capacity of a single ubernode (a few
terabytes at most, and on a big network it won't have enough bandwidth
either), and into more interesting realms.
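To be concrete about "closest to the target": a minimal sketch, not the
actual Fred code - PeerNode, getLocation() and isBackedOff() here are
illustrative stand-ins, not the real class:

    import java.util.List;

    class Router {
        /** Circular distance between two locations on the [0.0, 1.0) keyspace. */
        static double distance(double a, double b) {
            double d = Math.abs(a - b);
            return Math.min(d, 1.0 - d);
        }

        /** Pick the peer whose location is closest to the target key,
         *  skipping peers which are currently backed off. Note that
         *  there is no randomness here, and no "prefer the fast node"
         *  heuristic. */
        static PeerNode route(List<PeerNode> peers, double target) {
            PeerNode best = null;
            double bestDistance = Double.MAX_VALUE;
            for (PeerNode p : peers) {
                if (p.isBackedOff()) continue;
                double d = distance(p.getLocation(), target);
                if (d < bestDistance) {
                    bestDistance = d;
                    best = p;
                }
            }
            return best; // null means every peer is backed off
        }
    }

    class PeerNode {
        private final double location; // position on the keyspace circle
        private boolean backedOff;     // set when the peer signals overload
        PeerNode(double location) { this.location = location; }
        double getLocation() { return location; }
        boolean isBackedOff() { return backedOff; }
        void setBackedOff(boolean b) { backedOff = b; }
    }

Keyspace distance is the *only* criterion; the moment you route on link
speed instead, the search no longer converges on the nodes that should
be storing the data.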
>       * If I trust my friend, I'd be OK preferring to send
>       through him

Routing your requests to him, even though they are supposed to go
somewhere else, would be misrouting. This would cause the request
either not to find the data it was looking for (or, on inserts, to send
the data to completely the wrong place so that it isn't findable
later), or at the very least to travel more hops than it has to. The
result is that the whole network has to handle more load. The result of
that is that the whole network becomes overloaded.

> c) Implementing a NG-style, stochastic modeling system ensures that
> users are properly utilizing resources.

See above. Performance isn't everything. You also have to distribute
load, and especially storage, and the way to do that is through
routing.

>
> 3) The current strategy is fighting a symptom, not the problem.
>
> a) We can already achieve Ubernode-like results using bands of
> smaller nodes.
>    I) If I set up 10 mini-nodes, all interlinked, and each
>    connected to 10-15 peers, I could harvest just as much data on
>    the network
>    II) The network would see these as different nodes, and
>    fully utilize them.
>    III) Multiple IP addresses to run on are cheap ;)

Sure, you can exploit #freenet-refs just as you can exploit any other
harvestable channel. So what?

> b) The problems with Ubernodes are mitigated if the data is stored
> other places as well.
>    I) If freenet used proper NG-style modeling, it would always
>    draw from the fastest source, which is usually going to be
>    a point between the ubernode, and the direct user.
>    II) Once the user has downloaded it, by default it's in his
>    node anyway.

Then what's the point in having ubernodes?

> c) Let's find ways of working to utilize freenet nodes fully, and
> safely, so that when the bad guys come, and start EvilNodes, we're
> already doing well enough that people don't flock to them.

Routing 1Gbps of traffic to a node just because we happen to be on the
same LAN as it does not constitute "utilizing freenet nodes fully". We
cannot justify misrouting simply because a node is fast. At best we get
severely reduced network capacity. At worst we get meltdown. Both
symptoms were very obvious in 0.5, which had NGR.

>
> 4) We want more people to use Freenet- This brings more nodes to
> route, more exposure, and MORE MONEY, which means more dev-time.
>
> a) As it is, the network is awash with Backoffs.

Indeed. There is a problem with load limiting, which we are well on the
way to fixing. There may also be other bugs causing timeouts etc, but I
haven't found any recently.

>    I) We're not entirely sure how to fix it.

We have several good ideas with strong theoretical bases, which are
about to be simulated by mrogers for his Google SoC project.

>    II) Some of the solutions proposed seem more like guesses.

The reason they haven't been implemented yet is that we have decided
not to do any more "alchemy". This means that we need to simulate such
radical changes before deploying them.

> b) Users are more likely to use a faster net
>    I) People get frustrated with freenet speed.
>    II) It's a lot better than .5, but it's a LOT slower than it
>    should be.

I was under the impression that it's remarkably fast recently, at least
in between the backoffs. And if the new incompressible-fluid-flow load
limiting system works, then it should be very fast - and combined with
a new storage algorithm, it should have an immense capacity.
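Roughly, the idea is that a node accepts a request only when it has the
capacity to route it onward, so excess load backs up to the requester
like an incompressible fluid instead of piling up inside the network.
A toy sketch only - the real design was still being simulated at the
time, and the class name, numbers and bookkeeping here are invented:

    // Toy "accept only what we can route" limiter; NOT the real design.
    class LoadLimiter {
        private final int maxInFlight; // requests we can usefully route onward
        private int inFlight = 0;

        LoadLimiter(int maxInFlight) { this.maxInFlight = maxInFlight; }

        /** Accept a request only if there is spare routing capacity.
         *  Rejected requests bounce back to the previous hop at once,
         *  so the sender slows down instead of the node melting down. */
        synchronized boolean tryAccept() {
            if (inFlight >= maxInFlight) return false;
            inFlight++;
            return true;
        }

        /** Called when a request completes, is rejected downstream,
         *  or times out. */
        synchronized void completed() {
            if (inFlight > 0) inFlight--;
        }
    }

The point of rejecting at the input, rather than backing off halfway
across the network, is that the requester is throttled immediately and
no hop is asked to carry traffic it cannot actually forward.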
>    III) People join things just for speed- See 1) above.
> c) The more people who use the network, the more money the network
> brings in
> d) We can utilize ubernodes now, and move back later.
>    I) Right now, Ubernodes are one of the best tools for making
>    the network run faster.

So use them. I'm not stopping you. But misrouting will kill the
network.

>    II) After the network is bigger, we can back off of them.
>       * The network will naturally back off from them- They
>       can't keep up with 10000 users, for one. For another, no
>       one node can compete with 10000 smaller nodes.
>    III) Let's take the advantage in the short term, so that we
>    can better build the long term.

We do. Many people do. I draw the line at routing traffic to a node
which is simply not appropriate for it.

> I'll be happy to discuss this with anyone who's interested. It's a
> serious issue, and I'm trying to go about things the Right way.

-- 
Matthew J Toseland - [EMAIL PROTECTED]
Freenet Project Official Codemonkey - http://freenetproject.org/
ICTHUS - Nothing is impossible. Our Boss says so.