[freenet-CVS] freenet/src/freenet/transport tcpConnection.java, 1.45, 1.46
Update of /cvsroot/freenet/freenet/src/freenet/transport
In directory sc8-pr-cvs1:/tmp/cvs-serv11389/src/freenet/transport

Modified Files:
	tcpConnection.java
Log Message:
prevent a NullPointerException when starting the node without bandwidth limits

Index: tcpConnection.java
===
RCS file: /cvsroot/freenet/freenet/src/freenet/transport/tcpConnection.java,v
retrieving revision 1.45
retrieving revision 1.46
diff -u -w -r1.45 -r1.46
--- tcpConnection.java	21 Oct 2003 23:16:23 -	1.45
+++ tcpConnection.java	22 Oct 2003 06:24:07 -	1.46
@@ -77,14 +77,21 @@
         // Start NIO loops
         try {
             if(rsl == null) {
+                if (ibw != null)
                     rsl = new ReadSelectorLoop(ibw, Main.timerGranularity);
+                else
+                    rsl = new ReadSelectorLoop();
                 Thread rslThread = new Thread(rsl, " read interface thread");
                 rslThread.setDaemon(true);
                 // rslThread.setPriority(Thread.MAX_PRIORITY);
                 rslThread.start(); // inactive until given registrations
             }
             if(wsl == null) {
+                if (obw != null)
                     wsl = new WriteSelectorLoop(obw, Main.timerGranularity);
+                else
+                    wsl = new WriteSelectorLoop();
+
                 Thread wslThread = new Thread(wsl, " write interface thread");
                 wslThread.setDaemon(true);
                 // wslThread.setPriority(Thread.MAX_PRIORITY);

___
cvs mailing list
[EMAIL PROTECTED]
http://dodo.freenetproject.org/cgi-bin/mailman/listinfo/cvs
[freenet-dev] Re: Freenet network size
Toad wrote:
> There is no reason for it all to be cached on a single store. Which
> seems to be what you are suggesting. But the Windoze installer defaults
> to 10% of available disk space, and unix users will probably generally
> set a larger store than the default.

Are you sure that's true about the Windoze installer? I have 6GB free on an 80GB hard drive, and when I delete freenet.ini and launch freenet.jar with --config, it defaults to 256M. It seems like it should have defaulted to 600M. When I installed freenet months ago, it defaulted to 256M as well.

-Martin

___
Devl mailing list
[EMAIL PROTECTED]
http://dodo.freenetproject.org/cgi-bin/mailman/listinfo/devl
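For reference, the 10%-of-free-space heuristic Martin is checking works out as follows. This is a standalone sketch; whether the installer measures free or total space, and whether it uses decimal or binary megabytes, are assumptions here, not facts from the installer source:

```java
public class StoreSizeCheck {
    public static void main(String[] args) {
        long freeBytes = 6L * 1024 * 1024 * 1024;  // 6GB free, as Martin reports
        long tenPercent = freeBytes / 10;          // the claimed installer default
        long observedDefault = 256L * 1024 * 1024; // what --config actually offered
        // 10% of 6GB is roughly 600M, not the observed 256M
        System.out.println(tenPercent / (1024 * 1024) + "M expected, "
                + observedDefault / (1024 * 1024) + "M observed");
    }
}
```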
[freenet-dev] 6264 NPE
java.lang.NullPointerException
java.lang.NullPointerException
	at freenet.node.ds.StoreIOException.toString(StoreIOException.java:10)
	at freenet.node.ds.StoreIOException.toString(StoreIOException.java:10)
	at java.lang.String.valueOf(String.java:2131)
	at java.lang.StringBuffer.append(StringBuffer.java:370)
	at freenet.node.states.request.Pending.receivedDataReply(Pending.java:435)
	at freenet.node.states.request.TransferInsertPending.receivedMessage(TransferInsertPending.java:198)
	at sun.reflect.GeneratedMethodAccessor41.invoke(Unknown Source)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
	at java.lang.reflect.Method.invoke(Method.java:324)
	at freenet.node.State.received(State.java:126)
	at freenet.node.StateChain.received(StateChain.java:195)
	at freenet.node.StateChain.received(StateChain.java:249)
	at freenet.node.StateChain.received(StateChain.java:71)
	at freenet.node.StandardMessageHandler$Ticket.run(StandardMessageHandler.java:234)
	at freenet.node.StandardMessageHandler$Ticket.received(StandardMessageHandler.java:172)
	at freenet.node.StandardMessageHandler$Ticket.access$100(StandardMessageHandler.java:124)
	at freenet.node.StandardMessageHandler.handle(StandardMessageHandler.java:72)
	at freenet.Ticker$Event.run(Ticker.java:323)
	at freenet.thread.YThreadFactory$YThread.run(YThreadFactory.java:190)

j.
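The trace shows StringBuffer.append triggering String.valueOf, which calls the exception's own toString() at StoreIOException.java:10, which then throws NPE. A plausible shape of that bug, as a hypothetical reconstruction (the class name is borrowed from the trace, but the field name and message format are illustrative, not from the Freenet source):

```java
// Hypothetical reconstruction of the 6264 failure mode: logging code
// appends the exception to a StringBuffer, which calls toString(),
// which dereferences a field that can be null.
public class StoreIOExceptionSketch extends Exception {
    private final Exception inner; // may legitimately be null

    public StoreIOExceptionSketch(Exception inner) { this.inner = inner; }

    @Override
    public String toString() {
        // NPE here when inner == null: the exception being *logged*
        // gets replaced by an NPE thrown from its own toString()
        return "StoreIOException: " + inner.getMessage();
    }

    // a null-safe variant avoids the problem
    public String toStringSafe() {
        return "StoreIOException: " + (inner == null ? "(no cause)" : inner.getMessage());
    }

    public static void main(String[] args) {
        StoreIOExceptionSketch e = new StoreIOExceptionSketch(null);
        System.out.println(e.toStringSafe());
        try {
            System.out.println(e.toString());
        } catch (NullPointerException npe) {
            System.out.println("NPE from toString(), as in the trace above");
        }
    }
}
```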
[freenet-dev] exception in 6264
java.lang.NullPointerException
	at freenet.interfaces.BaseLocalNIOInterface.intAddress(BaseLocalNIOInterface.java:37)
	at freenet.interfaces.BaseLocalNIOInterface.hostAllowed(BaseLocalNIOInterface.java:189)
	at freenet.interfaces.BaseLocalNIOInterface.dispatch(BaseLocalNIOInterface.java:230)
	at freenet.interfaces.NIOInterface.acceptConnection(NIOInterface.java:98)
	at freenet.transport.tcpNIOListener.accept(tcpNIOListener.java:103)
	at freenet.transport.ListenSelectorLoop.processConnections(ListenSelectorLoop.java:106)
	at freenet.transport.AbstractSelectorLoop.loop(AbstractSelectorLoop.java:681)
	at freenet.transport.ListenSelectorLoop.run(ListenSelectorLoop.java:146)
	at java.lang.Thread.run(Thread.java:534)
[freenet-CVS] freenet/src/freenet Version.java,1.456,1.457
Update of /cvsroot/freenet/freenet/src/freenet
In directory sc8-pr-cvs1:/tmp/cvs-serv18803/src/freenet

Modified Files:
	Version.java
Log Message:
d'oh. 6264 for bwlimiting changes.

Index: Version.java
===
RCS file: /cvsroot/freenet/freenet/src/freenet/Version.java,v
retrieving revision 1.456
retrieving revision 1.457
diff -u -w -r1.456 -r1.457
--- Version.java	21 Oct 2003 01:57:49 -	1.456
+++ Version.java	21 Oct 2003 23:33:48 -	1.457
@@ -18,7 +18,7 @@
     public static String protocolVersion = "1.46";
 
     /** The build number of the current revision */
-    public static final int buildNumber = 6263;
+    public static final int buildNumber = 6264;
 
     // 6028: may 3; ARK retrieval fix
     public static final int ignoreBuildsAfter = 6500;
[freenet-CVS] freenet/src/freenet/node Main.java,1.273,1.274
Update of /cvsroot/freenet/freenet/src/freenet/node
In directory sc8-pr-cvs1:/tmp/cvs-serv16093/src/freenet/node

Modified Files:
	Main.java
Log Message:
6264: MAJOR improvement to low level bandwidth limiting... set a minimum
send size, of the bandwidth that would be used in 20 timer granules,
(20ms on Sun JVMs...), and send that many bytes if possible on each
throttle cycle. Logging, including the interesting objects dump now
includes memory stats.

Index: Main.java
===
RCS file: /cvsroot/freenet/freenet/src/freenet/node/Main.java,v
retrieving revision 1.273
retrieving revision 1.274
diff -u -w -r1.273 -r1.274
--- Main.java	21 Oct 2003 01:52:36 -	1.273
+++ Main.java	21 Oct 2003 23:16:22 -	1.274
@@ -76,6 +76,7 @@
 	static NodeConfigUpdater configUpdater = null;
 	static FnpLinkManager FNPmgr = null;
 	public static RoutingTable origRT;
+	public static int timerGranularity;
 
 	static public FileLoggerHook loggerHook() {
 		return loggerHook;
@@ -332,6 +333,8 @@
 			freenet.session.FnpLink.AUTH_LAYER_VERSION=0x05;
 		}
 
+		timerGranularity = calculateGranularity();
+
 		//runMiscTests();
 
 		// NOTE: we are slowly migrating stuff related to setting up
@@ -2753,7 +2756,12 @@
 	public static void dumpInterestingObjects() {
 		if(Core.logger.shouldLog(Logger.MINOR)) {
+			Runtime r = Runtime.getRuntime();
+			long totalMem = r.totalMemory();
+			long memUsed = totalMem - r.freeMemory();
 			String status = "dump of interesting objects after gc in checkpoint:"+
+				"\nMemory used: "+memUsed+
+				"\nTotal allocated memory: "+totalMem+
 				"\ntcpConnections " +freenet.transport.tcpConnection.instances+
 				"\ntcpConnections open " +freenet.transport.tcpConnection.openInstances+
 				((freenet.transport.tcpConnection.openInstances-freenet.transport.tcpConnection.instances > 2) ? "\n ERROR: MORE OPEN THAN EXTANT! " :"")+
@@ -3284,4 +3292,34 @@
 		enlargeFile(-1, length);
 	}
 }
+
+	static int calculateGranularity() {
+		long prevTime = System.currentTimeMillis();
+		int count = 0;
+		int minGranularity = Integer.MAX_VALUE;
+		int maxGranularity = 0;
+		int total = 0;
+		while(true) {
+			long now = System.currentTimeMillis();
+			int diff = (int)(now - prevTime);
+			if(diff != 0) {
+				Core.logger.log(Main.class, "Timer granularity no more than "+
+					diff, Logger.DEBUG);
+				count++;
+				total += diff;
+				if(diff < minGranularity) minGranularity = diff;
+				if(diff > maxGranularity) maxGranularity = diff;
+				if(count > 100) break;
+			}
+			try {
+				Thread.sleep(1);
+			} catch (InterruptedException e) {}
+			prevTime = now;
+		}
+		int avgGranularity = total / count;
+		Core.logger.log(Main.class, "Granularity is between "+minGranularity+
+			"ms and "+maxGranularity+"ms, average is "+
+			avgGranularity, Logger.MINOR);
+		return avgGranularity;
+	}
 }

___
cvs mailing list
[EMAIL PROTECTED]
http://dodo.freenetproject.org/cgi-bin/mailman/listinfo/cvs
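The calculateGranularity() probe in the commit above can be exercised standalone. This is a trimmed sketch of the same measurement loop (logging removed, otherwise the same logic): sleep 1ms at a time and record how coarsely System.currentTimeMillis() actually ticks.

```java
public class GranularityProbe {
    // Measure the timer granularity by sleeping ~1ms repeatedly and
    // averaging the nonzero jumps observed in the millisecond clock.
    static int calculateGranularity() {
        long prevTime = System.currentTimeMillis();
        int count = 0, total = 0;
        int min = Integer.MAX_VALUE, max = 0;
        while (true) {
            long now = System.currentTimeMillis();
            int diff = (int) (now - prevTime);
            if (diff != 0) {          // clock ticked: record the jump
                count++;
                total += diff;
                if (diff < min) min = diff;
                if (diff > max) max = diff;
                if (count > 100) break; // enough samples for an average
            }
            try { Thread.sleep(1); } catch (InterruptedException e) {}
            prevTime = now;
        }
        return total / count; // average observed tick, in ms
    }

    public static void main(String[] args) {
        System.out.println("avg granularity: " + calculateGranularity() + "ms");
    }
}
```

On a 2003-era Sun JVM on Windows this reportedly averaged around 20ms; on modern systems it is typically 1ms.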
[freenet-CVS] freenet/src/freenet ConnectionHandler.java,1.192,1.193
Update of /cvsroot/freenet/freenet/src/freenet In directory sc8-pr-cvs1:/tmp/cvs-serv16093/src/freenet Modified Files: ConnectionHandler.java Log Message: 6264: MAJOR improvement to low level bandwidth limiting... set a minimum send size, of the bandwidth that would be used in 20 timer granules, (20ms on Sun JVMs...), and send that many bytes if possible on each throttle cycle. Logging, including the interesting objects dump now includes memory stats. Index: ConnectionHandler.java === RCS file: /cvsroot/freenet/freenet/src/freenet/ConnectionHandler.java,v retrieving revision 1.192 retrieving revision 1.193 diff -u -w -r1.192 -r1.193 --- ConnectionHandler.java 21 Oct 2003 11:04:52 - 1.192 +++ ConnectionHandler.java 21 Oct 2003 23:16:22 - 1.193 @@ -752,7 +752,9 @@ if(logDEBUG) logDEBUG("Returning to RSL because decrypt buffer empty"); // return 1; // buffer empty } + if(logDEBUG) logDEBUG("Leaving synchronized(x)"); } + if(logDEBUG) logDEBUG("Left synchronized(x)"); } catch (InvalidMessageException e){ //almost copypasted from below Core.logger.log(this, "Invalid message: " + e.toString()+" for "+this, e, Logger.MINOR); invalid++; @@ -776,6 +778,7 @@ continue; // Don't drop the next message } } + if(logDEBUG) logDEBUG("Left try{}"); //at this point we have returned succesfully from tryPrase //if m is null, we need more data if (m==null) { ___ cvs mailing list [EMAIL PROTECTED] http://dodo.freenetproject.org/cgi-bin/mailman/listinfo/cvs
[freenet-CVS] freenet/src/freenet/transport ReadSelectorLoop.java, 1.56, 1.57 ThrottledSelectorLoop.java, 1.25, 1.26 WriteSelectorLoop.java, 1.62, 1.63 tcpConnection.java, 1.44, 1.45
Update of /cvsroot/freenet/freenet/src/freenet/transport
In directory sc8-pr-cvs1:/tmp/cvs-serv16093/src/freenet/transport

Modified Files:
	ReadSelectorLoop.java ThrottledSelectorLoop.java WriteSelectorLoop.java
	tcpConnection.java
Log Message:
6264: MAJOR improvement to low level bandwidth limiting... set a minimum
send size, of the bandwidth that would be used in 20 timer granules,
(20ms on Sun JVMs...), and send that many bytes if possible on each
throttle cycle. Logging, including the interesting objects dump now
includes memory stats.

Index: ReadSelectorLoop.java
===
RCS file: /cvsroot/freenet/freenet/src/freenet/transport/ReadSelectorLoop.java,v
retrieving revision 1.56
retrieving revision 1.57
diff -u -w -r1.56 -r1.57
--- ReadSelectorLoop.java	14 Oct 2003 23:59:01 -	1.56
+++ ReadSelectorLoop.java	21 Oct 2003 23:16:23 -	1.57
@@ -51,9 +51,10 @@
        will be used. We can't however ignore those users on 100MBit connections ;-))
     */
     protected static final int MAX_CONC_CHANNELS=20;
 
-    public ReadSelectorLoop(Bandwidth bw) throws IOException{
+    public ReadSelectorLoop(Bandwidth bw, int timerGranularity)
+        throws IOException{
-        super(bw);
+        super(bw, timerGranularity);
         //create the buffer stuff
         bufferMap = new HashMap(MAX_CONC_CHANNELS);
         // buffers = new ByteBuffer[MAX_CONC_CHANNELS];

Index: ThrottledSelectorLoop.java
===
RCS file: /cvsroot/freenet/freenet/src/freenet/transport/ThrottledSelectorLoop.java,v
retrieving revision 1.25
retrieving revision 1.26
diff -u -w -r1.25 -r1.26
--- ThrottledSelectorLoop.java	14 Oct 2003 19:41:38 -	1.25
+++ ThrottledSelectorLoop.java	21 Oct 2003 23:16:23 -	1.26
@@ -58,11 +58,14 @@
     // sync on this to prevent SelectorLoop thread from un-throttling
     protected Bandwidth bw;
+    protected int timerGranularity;
 
-    public ThrottledSelectorLoop(Bandwidth bw) throws IOException {
+    public ThrottledSelectorLoop(Bandwidth bw, int timerGranularity)
+        throws IOException {
         this.bw = bw;
         throttleDisabledQueue = new LinkedList();
+        this.timerGranularity = timerGranularity;
         //rand = freenet.Core.randSource;
     }

Index: WriteSelectorLoop.java
===
RCS file: /cvsroot/freenet/freenet/src/freenet/transport/WriteSelectorLoop.java,v
retrieving revision 1.62
retrieving revision 1.63
diff -u -w -r1.62 -r1.63
--- WriteSelectorLoop.java	18 Oct 2003 00:17:57 -	1.62
+++ WriteSelectorLoop.java	21 Oct 2003 23:16:23 -	1.63
@@ -51,12 +51,24 @@
     private static final int TABLE_SIZE=512;
     private static final float TABLE_FACTOR=(float)0.6;
 
+    int minSendBytesPerThrottleCycle;
+    private static final int GRANULES_PER_THROTTLE_CYCLE = 20;
+
     /**
      * nothing special about this constructor
      */
-    public WriteSelectorLoop(Bandwidth bw) throws IOException {
+    public WriteSelectorLoop(Bandwidth bw, int timerGranularity)
+        throws IOException {
-        super(bw);
+        super(bw, timerGranularity);
+        minSendBytesPerThrottleCycle =
+            (bw.currentBandwidthPerSecondAllowed() *
+             GRANULES_PER_THROTTLE_CYCLE *
+             timerGranularity) / 1000;
+        Core.logger.log(this, "Minimum send size: "+
+                        minSendBytesPerThrottleCycle+" per "+
+                        (GRANULES_PER_THROTTLE_CYCLE * timerGranularity)+
+                        "ms", Logger.MINOR);
         jobs = new BlockingQueue();
         uniqueness = new Hashtable(TABLE_SIZE,TABLE_FACTOR);
         sorter=new QuickSorter();
@@ -68,6 +80,7 @@
     public WriteSelectorLoop() throws IOException {
         super();
+        minSendBytesPerThrottleCycle = Integer.MAX_VALUE;
         jobs = new BlockingQueue();
         uniqueness = new Hashtable(TABLE_SIZE,TABLE_FACTOR);
         sorter=new QuickSorter();
@@ -474,10 +487,6 @@
         }
     }
 
-    //check if any of the channels got closed;
-    //REDFLAG: the behavior of select() on channels that have
-    //been closed remotely is not tested.
-    //TEST PROPERLY AND IMPLEMENT CHECKS HERE!!!
     protected final boolean inspectChannels() {
         if(logDebug)
             Core.logger.log(this, "inspectChannels()", Logger.DEBUG);
@@ -513,7 +522,6 @@
         try{
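The minimum-send computation this commit introduces is bandwidth x 20 granules x granularity / 1000. Worked through standalone (the 10KB/s outbound limit below is an assumed example figure; the 20ms granularity matches the "20ms on Sun JVMs" note in the log message):

```java
public class MinSendSize {
    static final int GRANULES_PER_THROTTLE_CYCLE = 20;

    // Bytes the throttle should send per cycle: the bandwidth that would
    // be consumed over 20 timer granules' worth of wall-clock time.
    static int minSendBytes(int bytesPerSecond, int timerGranularityMs) {
        return (bytesPerSecond * GRANULES_PER_THROTTLE_CYCLE * timerGranularityMs) / 1000;
    }

    public static void main(String[] args) {
        // assumed example: 10240 bytes/s limit, 20ms granularity
        // -> 10240 * 20 * 20 / 1000 = 4096 bytes per 400ms cycle
        System.out.println(minSendBytes(10240, 20));
    }
}
```

The bandwidth-less constructor sets the field to Integer.MAX_VALUE instead, effectively disabling the minimum when there is no limit.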
[freenet-CVS] freenet/src/freenet/support/io NIOInputStream.java, 1.23, 1.24
Update of /cvsroot/freenet/freenet/src/freenet/support/io
In directory sc8-pr-cvs1:/tmp/cvs-serv16093/src/freenet/support/io

Modified Files:
	NIOInputStream.java
Log Message:
6264: MAJOR improvement to low level bandwidth limiting... set a minimum
send size, of the bandwidth that would be used in 20 timer granules,
(20ms on Sun JVMs...), and send that many bytes if possible on each
throttle cycle. Logging, including the interesting objects dump now
includes memory stats.

Index: NIOInputStream.java
===
RCS file: /cvsroot/freenet/freenet/src/freenet/support/io/NIOInputStream.java,v
retrieving revision 1.23
retrieving revision 1.24
diff -u -w -r1.23 -r1.24
--- NIOInputStream.java	15 Oct 2003 21:15:49 -	1.23
+++ NIOInputStream.java	21 Oct 2003 23:16:22 -	1.24
@@ -68,7 +68,8 @@
 	public void registered(){
 		registered = true;
-		if(logDEBUG) Core.logger.log(this, "Registered "+this, Logger.DEBUG);
+		if(logDEBUG) Core.logger.log(this, "Registered "+this,
+			new Exception("debug"), Logger.DEBUG);
 		synchronized(regLock) {
 			regLock.notifyAll();
 		}
@@ -289,7 +290,7 @@
 		if (alreadyClosedLink) return -1;
 		accumulator.wait(5*60*1000);
 		if (System.currentTimeMillis() - now >= 5*60*1000) {
-			Core.logger.log(this, "waited more than 5 minutes in NIOIS.read() "+conn+":"+this+"- closing",Logger.MINOR);
+			Core.logger.log(this, "waited more than 5 minutes in NIOIS.read() "+conn+":"+this+"- closing", new Exception("debug"), Logger.NORMAL);
 			close();
 			return -1;
 		}
Re: [freenet-dev] Node freezes for 4 seconds?
Well... we tried async GC, it was *really* slow. Zab did anyway.

On Tue, Oct 21, 2003 at 10:59:44PM +0200, [EMAIL PROTECTED] wrote:
> > While tracking down node nonresponsiveness, I have found mysterious gaps
> > of 4 (or sometimes 3) seconds in my log, rather frequently. In one
> > instance I saw two log messages at 8:37:19 sandwiching a log message
> > from 8:27:15. A reasonable hypothesis is that this is caused by the
> > whole node freezing from 8:37:15 to 8:37:19 - the earlier message had
> > constructed the time string but had not yet enqueued. Also, this is
> > disturbingly common... The next freeze was from 8:37:19 to 8:37:22!
> > The node had been running for some hours, and its memory footprint was
> > 321MB virtual, 277MB resident, according to top, with -Xmx300m. So maybe
> > it is garbage collection? Garbage collection is monolithic IIRC, and
> > locks the whole JVM...
>
> i would say the GC is a possible candidate for those time-gaps, too.
>
> while programming a realtime-java-server-application, where you'd notice
> gaps of a 1/4 second, i experienced some time-gaps, too.
> i had to "tune" the GC-settings to prevent gaps arising when the GC is
> doing its work (you can see it with 'top': a single thread with a lifetime
> same as the JVM climbing to first place, working some seconds and then
> eventually drowning below the other threads)
> my solution was rather easy: allocate only a small minimum heap (24M), so
> the GC runs rather often but only for a very short time, so you'd only get
> small performance "bumps" without significance.
> alas, fred is not so memory friendly, so you'd want to experiment with the
> GC-settings.
>
> i'm not sure the GC is locking the whole JVM, as it is written in the
> 1.1.7 (!) spec:
> "-noasyncgc
> Turns off asynchronous garbage collection. When activated no garbage
> collection takes place unless it is explicitly called or the program runs
> out of memory. Normally garbage collection runs as an asynchronous thread
> in parallel with other threads."
> this sounds like the GC is parallel by default. whether the behaviour has
> changed in current 1.4.x JVMs is not clear, as this parameter has been
> removed
>
> but they have gained a new option (from sun JVM 1.4.2 docs):
> "-Xincgc
> Enable the incremental garbage collector. The incremental garbage
> collector, which is off by default, will eliminate occasional
> garbage-collection pauses during program execution. However, it can lead
> to a roughly 10% decrease in overall GC performance."

-- 
Matthew J Toseland - [EMAIL PROTECTED]
Freenet Project Official Codemonkey - http://freenetproject.org/
ICTHUS - Nothing is impossible. Our Boss says so.

signature.asc
Description: Digital signature
Re: [freenet-dev] Node freezes for 4 seconds?
> i would say the GC is a possible candidate for those time-gaps, too.
>
> while programming a realtime-java-server-application, where you'd notice
> gaps of a 1/4 second, i experienced some time-gaps, too.
> i had to "tune" the GC-settings to prevent gaps arising when the GC is
> doing its work (you can see it with 'top': a single thread with a lifetime
> same as the JVM climbing to first place, working some seconds and then
> eventually drowning below the other threads)
> my solution was rather easy: allocate only a small minimum heap (24M), so
> the GC runs rather often but only for a very short time, so you'd only get
> small performance "bumps" without significance.
> alas, fred is not so memory friendly, so you'd want to experiment with the
> GC-settings.

So, what we need to do is reduce memory thrashing as much as possible, to possibly avoid this kind of problem?!..

/N
Re: [freenet-dev] Node freezes for 4 seconds?
> While tracking down node nonresponsiveness, I have found mysterious gaps
> of 4 (or sometimes 3) seconds in my log, rather frequently. In one
> instance I saw two log messages at 8:37:19 sandwiching a log message
> from 8:27:15. A reasonable hypothesis is that this is caused by the
> whole node freezing from 8:37:15 to 8:37:19 - the earlier message had
> constructed the time string but had not yet enqueued. Also, this is
> disturbingly common... The next freeze was from 8:37:19 to 8:37:22!
> The node had been running for some hours, and its memory footprint was
> 321MB virtual, 277MB resident, according to top, with -Xmx300m. So maybe
> it is garbage collection? Garbage collection is monolithic IIRC, and
> locks the whole JVM...

i would say the GC is a possible candidate for those time-gaps, too.

while programming a realtime-java-server-application, where you'd notice gaps of a 1/4 second, i experienced some time-gaps, too.
i had to "tune" the GC-settings to prevent gaps arising when the GC is doing its work (you can see it with 'top': a single thread with a lifetime same as the JVM climbing to first place, working some seconds and then eventually drowning below the other threads)
my solution was rather easy: allocate only a small minimum heap (24M), so the GC runs rather often but only for a very short time, so you'd only get small performance "bumps" without significance.
alas, fred is not so memory friendly, so you'd want to experiment with the GC-settings.

i'm not sure the GC is locking the whole JVM, as it is written in the 1.1.7 (!) spec:
"-noasyncgc
Turns off asynchronous garbage collection. When activated no garbage collection takes place unless it is explicitly called or the program runs out of memory. Normally garbage collection runs as an asynchronous thread in parallel with other threads."
this sounds like the GC is parallel by default. whether the behaviour has changed in current 1.4.x JVMs is not clear, as this parameter has been removed

but they have gained a new option (from sun JVM 1.4.2 docs):
"-Xincgc
Enable the incremental garbage collector. The incremental garbage collector, which is off by default, will eliminate occasional garbage-collection pauses during program execution. However, it can lead to a roughly 10% decrease in overall GC performance."
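The two tuning ideas in this thread (a small initial heap so collections stay short, and the incremental collector to smooth out pauses) can be combined on the launch command line. A sketch only: the jar name is illustrative, and -Xincgc is a Sun 1.4.x flag that later JVMs dropped.

```shell
# Small initial heap: frequent but short collections (the 24M figure
# is the one suggested in the thread). -Xincgc trades ~10% GC
# throughput for fewer long pauses. -Xmx300m matches the node above.
java -Xms24m -Xmx300m -Xincgc -jar freenet.jar
```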
[freenet-dev] Node freezes for 4 seconds?
While tracking down node nonresponsiveness, I have found mysterious gaps of 4 (or sometimes 3) seconds in my log, rather frequently. In one instance I saw two log messages at 8:37:19 sandwiching a log message from 8:27:15. A reasonable hypothesis is that this is caused by the whole node freezing from 8:37:15 to 8:37:19 - the earlier message had constructed the time string but had not yet enqueued. Also, this is disturbingly common... The next freeze was from 8:37:19 to 8:37:22!

The node had been running for some hours, and its memory footprint was 321MB virtual, 277MB resident, according to top, with -Xmx300m. So maybe it is garbage collection? Garbage collection is monolithic IIRC, and locks the whole JVM...

-- 
Matthew J Toseland - [EMAIL PROTECTED]
Freenet Project Official Codemonkey - http://freenetproject.org/
ICTHUS - Nothing is impossible. Our Boss says so.

signature.asc
Description: Digital signature
Re: [freenet-dev] 6263 Status
On Tue, Oct 21, 2003 at 03:16:17AM -0700, Mike Stump wrote:
> 6263 seems to have reasonable send queue numbers...
>
> It continues with nice low CPU load, which is very good.
>
> The bumping the load up to 1.0F is wrong. Would be better to do:
>
> load = MAX (sentbytes/limitbytes, jobs/maxjobs, bla/maxbla);
>
> This way, one can watch it approach 1.0, instead of it just slamming
> into 1.0 every now and then. Not a big issue...
>
> hops since reset continues to be really high, 1.4 or so. I still want
> to see it trend up to a higher number.

You mean really low?

> None of the bookmarks work, well, except for the help index. :-(
>
> And now for some Crazy Ideas(tm):
>
> Time to work on time based keys, or whatever? I'd really rather have
> whatever content my node can find, and it has found them all before,
> and has them on hard disk. It is stupid to not display them. I don't
> care if it has to remember the date part of the key in a flat file
> somewhere:
>
> [EMAIL PROTECTED]// --> 23453
>
> and then try the current date, and when that doesn't work immediately
> (1 second or so), fall back to the more recent one, queuing the lookup
> for the current one. Let the user hit reload if they want to see the
> most recent. This will provide a better user experience, something
> freenet is lacking.

The proposed solution is TUKs. They will be implemented eventually, but we have more pressing concerns at the moment.

> Also, I'd still like to see caching of successful answers to Qs and
> populate the store with such things. A store of at least 100,000 of
> them (configurable), with a max timeout of, say one week or one month
> (configurable). Coupled with, when a node gets loaded, I'd like to
> see Qs answered with pointers to people we last gave data to. The
> idea is for massively popular data to chase down to the last few
> people that handed out the data, a la bit-torrent, so that we can
> shift load effortlessly from very overloaded nodes to nodes that might
> not have any other keys at all on them (and hence, no one even Qing
> them). This would be done with a certain probability, so that content
> doesn't disappear just because the last person that grabbed it did.
> Maybe 0.5 to start with.
>
> The idea is to improve the network's ability to find the data, quickly,
> accurately, and on nodes that aren't overloaded and that do have the
> data, reducing the amount of DNFs, reducing the amount of Qs that get
> QRs. If those nodes don't want the load, they can pass off the data to
> some other node, and return answers to the new end of the line.

This would of course use up our bandwidth as well as theirs... and they could well be transient, or not have the data any more. How is this beneficial relative to routing? Wouldn't it allow an attacker to obtain hits for a given file far too easily?

> Also, love to see a few nodes (1/200 to 1/10,000 of them) that have
> lots of bw, threads and open connections, run up to 2,000-10,000
> connections and never do source reset or cache anything and just
> accept in queries and never QR anything. They should just let the
> request timeout on the client when the loading gets to be too much.
> If this state persists, they should ask a high uptime, high bw client
> to become a super node, and peer with them and move half of the
> existing connections to the new super node. Every node would connect
> to exactly one more-super node. A super node should connect to
> roughly the same depth of other nodes. So, for example, the first
> super node up from the grunts would have all grunts (around 200-10,000
> of them) plus 1 connection to a more super node. This node would have
> 200-10,000 super nodes which each have 200-1 grunts, or
> 40,000-100,000,000 grunts. And so on... at a HTL around 10, we hit a
> network size of 1 grunts. You
> can think of this as a routing of last resort if you want, though. If
> freenet normally finds the data, great, but when it can't, send the Q
> to the super node. I predict that the super node will have a 100% hit
> rate and a tSuccessSearch in the <1s range (assuming positive
> caching). Personally, I think it'd work so well, that we'd rather go
> to the super node first and then if the data isn't found via the super
> node, we can always do a normal style freenet lookup. We could have
> fred keep tabs on how well the super node is working. If it works,
> great, and if it doesn't, don't use it.

Yawn. We have yet to establish that freenet routing does not work. When and if we do, we can consider doing such things, but they will have a considerable cost in attack resistance. NGRouting is quite capable of finding slow but reliable nodes, or fast but unreliable nodes, or ubernodes that are both, and routing to them when appropriate.

> The outbound requests out of a super node to the next higher super
> node can be just those requests that are the most popular
Re: [freenet-dev] 6263 Status
On Tuesday 21 October 2003 05:16, Mike Stump wrote:
> Time to work on time based keys, or whatever? I'd really rather have
> whatever content my node can find, and it has found them all before,
> and has them on hard disk. It is stupid to not display them. I don't
> care if it has to remember the date part of the key in a flat file
> somewhere:
>
> [EMAIL PROTECTED]// --> 23453
>
> and then try the current date, and when that doesn't work immediately
> (1 second or so), fall back to the more recent one, queuing the lookup
> for the current one. Let the user hit reload if they want to see the
> most recent. This will provide a better user experience, something
> freenet is lacking.

Have you read the description of TUKs? That really sounds like the solution. (It's in CVS somewhere.)

> Also, I'd still like to see caching of successful answers to Qs and
> populate the store with such things. A store of at least 100,000 of
> them (configurable), with a max timeout of, say one week or one month
> (configurable). Coupled with, when a node gets loaded, I'd like to
> see Qs answered with pointers to people we last gave data to. The
> idea is for massively popular data to chase down to the last few
> people that handed out the data, a la bit-torrent, so that we can
> shift load effortlessly from very overloaded nodes to nodes that might
> not have any other keys at all on them (and hence, no one even Qing
> them). This would be done with a certain probability, so that content
> doesn't disappear just because the last person that grabbed it did.
> Maybe 0.5 to start with.

I proposed doing that a while back. However, I later realized that it would not work. This is because you need to distinguish which way the request is going. I.e. suppose you send the request back down the chain and the next node has forgotten about that request. Then you have just backtracked, so you are one node further from the data. Also, even if it works correctly, you are very likely to run out of HTL before you reach the requesting node. This is because the path you are traveling is almost the full length of their request plus the number of hops it took you to get there. (Also, it is more likely to find a previous requester the closer you get to the data.) However, I do agree that making use of this information would be a good idea. Read some of the recent discussion on chat. I'm planning on putting together a paper on how to properly do this.

> The idea is to improve the network's ability to find the data, quickly,
> accurately, and on nodes that aren't overloaded and that do have the
> data, reducing the amount of DNFs, reducing the amount of Qs that get
> QRs. If those nodes don't want the load, they can pass off the data to
> some other node, and return answers to the new end of the line.
>
> Also, love to see a few nodes (1/200 to 1/10,000 of them) that have
> lots of bw, threads and open connections, run up to 2,000-10,000
> connections and never do source reset or cache anything and just
> accept in queries and never QR anything. They should just let the
> request timeout on the client when the loading gets to be too much.
> If this state persists, they should ask a high uptime, high bw client
> to become a super node, and peer with them and move half of the
> existing connections to the new super node. Every node would connect
> to exactly one more-super node. A super node should connect to
> roughly the same depth of other nodes. So, for example, the first
> super node up from the grunts would have all grunts (around 200-10,000
> of them) plus 1 connection to a more super node. This node would have
> 200-10,000 super nodes which each have 200-1 grunts, or
> 40,000-100,000,000 grunts. And so on... at a HTL around 10, we hit a
> network size of 1 grunts. You
> can think of this as a routing of last resort if you want, though. If
> freenet normally finds the data, great, but when it can't, send the Q
> to the super node. I predict that the super node will have a 100% hit
> rate and a tSuccessSearch in the <1s range (assuming positive
> caching). Personally, I think it'd work so well, that we'd rather go
> to the super node first and then if the data isn't found via the super
> node, we can always do a normal style freenet lookup. We could have
> fred keep tabs on how well the super node is working. If it works,
> great, and if it doesn't, don't use it.

That is one of the good things about Freenet. If you want to make a super node, get a fast computer and a LOT of bandwidth and run Freenet on it. If you want to do this, go for it. To an extent I think this is a good idea. Suppose you and a bunch of other people want fast access to Freenet. You can either each spend a lot of money on hardware and a decently fast connection and leave your PCs up all the time, or you could pool your money and buy one really good server, and all access it. Because bandwidth is cheaper
Re: [freenet-dev] Targetted DOS in DHTs
On Mon, 20 Oct 2003, Edward J. Huff wrote:
> On Mon, 2003-10-20 at 16:01, Todd Walton wrote:
> > We used to have a list, tech@, where this kind of discussion would go, but
> > it fell to the reaper.
>
> Now we use chat for that purpose, since it gets no other traffic...

Oh. Well, chat isn't on the mailing list page either.

-todd
___ Devl mailing list [EMAIL PROTECTED] http://dodo.freenetproject.org/cgi-bin/mailman/listinfo/devl
[freenet-dev] Re: JDK
Todd Walton wrote:
> Is it correct that Sun's JDK 1.4.2 is not recommended for use with
> Freenet? 1.4.1 is preferred at this time, correct?
>
> -todd

Hi, I'm currently running the 6263 build with the 1.4.2_02-b03 JRE on a Win2k machine and it seems to work quite well after some hours. No error entries in the log, and it seems like it needs a little less RAM, but this could also be due to the 6263 build. It would be very interesting to hear which JVM the devs recommend at the moment, because the recommendation of 1.4.1 was made even before 1.4.2_01 was out. So things may have changed.

Greets
Matthias
___ Devl mailing list [EMAIL PROTECTED] http://dodo.freenetproject.org/cgi-bin/mailman/listinfo/devl
[freenet-CVS] freenet/src/freenet PeerHandler.java, 1.35, 1.36 OpenConnectionManager.java, 1.138, 1.139 ConnectionHandler.java, 1.191, 1.192
Update of /cvsroot/freenet/freenet/src/freenet In directory sc8-pr-cvs1:/tmp/cvs-serv29746/src/freenet Modified Files: PeerHandler.java OpenConnectionManager.java ConnectionHandler.java Log Message: Made the OCM peerHandler mode display messages received stats. Index: PeerHandler.java === RCS file: /cvsroot/freenet/freenet/src/freenet/PeerHandler.java,v retrieving revision 1.35 retrieving revision 1.36 diff -u -w -r1.35 -r1.36 --- PeerHandler.java21 Oct 2003 09:40:46 - 1.35 +++ PeerHandler.java21 Oct 2003 11:04:52 - 1.36 @@ -83,6 +83,14 @@ { return messagesSendsFailed; } + public long getMessagesReceived() + { + return messagesReceived; + } + public long getMessgesReceiveFailed() + { + return 0; //TODO: Fix this or removed it? + } public long getDataSent(){ return dataSent; } @@ -925,9 +933,13 @@ public static final int CONNECTION_ATTEMPTS = 25; //DONE public static final int CONNECTION_SUCCESSES = 26; //DONE public static final int CONNECTION_SUCCESS_RATIO = 27; //DONE - public static final int MESSAGES_HANDLED_SUCCESSFULLY = 28; //DONE - public static final int MESSAGES_HANDLED_FAILED = 29; //DONE - public static final int MESSAGES_HANDLED_COMBINED = 30; //DONE + public static final int MESSAGES_SENT_SUCCESSFULLY = 28; //DONE + public static final int MESSAGES_SENDFAILURE = 29; //DONE + public static final int MESSAGES_SENT_COMBINED = 30; //DONE + public static final int MESSAGES_RECEIVED_SUCCESSFULLY = 31; //DONE + public static final int MESSAGES_RECEIVEFAILURE = 32; //DONE + public static final int MESSAGES_RECEIVED_COMBINED = 33; //DONE + public static final int MESSAGES_HANDLED_COMBINED = 34; //DONE private int iCompareMode = UNORDERED; @@ -1035,12 +1047,32 @@ return secondaryCompare(iSign, new Long(ph1.totalOutboundSuccesses).compareTo(new Long(ph2.totalOutboundSuccesses)), ph1, ph2); case CONNECTION_SUCCESS_RATIO : return secondaryCompare(iSign, new Float(ph1.getOutboundConnectionSuccessRatio()).compareTo(new 
Float(ph2.getOutboundConnectionSuccessRatio())), ph1, ph2); - case MESSAGES_HANDLED_SUCCESSFULLY : + case MESSAGES_SENT_SUCCESSFULLY : return secondaryCompare(iSign, new Long(ph1.getMessageAccounter().getMessgesSent()).compareTo(new Long(ph2.getMessageAccounter().getMessgesSent())), ph1, ph2); - case MESSAGES_HANDLED_FAILED : + case MESSAGES_SENDFAILURE : return secondaryCompare(iSign, new Long(ph1.getMessageAccounter().getMessgeSendsFailed()).compareTo(new Long(ph2.getMessageAccounter().getMessgeSendsFailed())), ph1, ph2); + case MESSAGES_SENT_COMBINED : + { + MessageAccounter m1 = ph1.getMessageAccounter(); + MessageAccounter m2 = ph2.getMessageAccounter(); + return secondaryCompare(iSign, new Long(m1.getMessgesSent()+m1.getMessgeSendsFailed()).compareTo(new Long(m2.getMessgesSent()+m2.getMessgeSendsFailed())), ph1, ph2); + } + case MESSAGES_RECEIVED_SUCCESSFULLY : + return secondaryCompare(iSign, new Long(ph1.getMessageAccounter().getMessagesReceived()).compareTo(new Long(ph2.getMessageAccounter().getMessagesReceived())), ph1, ph2); + case MESSAGES_RECEIVEFAILURE : + return secondaryCompare(iSign, new Long(ph1.getMessageAccounter().getMessgesReceiveFailed()).compareTo(new Long(ph2.getMessageAccounter().getMessgesReceiveFailed())), ph1, ph2); + case MESSAGES_RECEIVED_COMBINED : + { + MessageAccounter m1 = ph1.getMessageAccounter(); + MessageAccounter m2 = ph2.getMessageAccounter(); + return secondaryCompare(iSign, new Long(m1.getMessagesReceived()+m1.getMessgesReceiveFailed()).compareTo(new Long(m2.getMessagesReceived()+
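A minimal standalone sketch of the new combined-count sort modes this diff adds (MESSAGES_SENT_COMBINED and MESSAGES_RECEIVED_COMBINED): compare two peers by the sum of their successful and failed message counts. The `PeerStats` class and its field names are illustrative placeholders, not Freenet's actual types.

```java
// Illustrative stand-in for PeerHandler's MessageAccounter; the real
// class lives inside PeerHandler and pulls its counts from a Hashtable.
final class PeerStats {
    final long sent, sendsFailed, received, receivesFailed;

    PeerStats(long sent, long sendsFailed, long received, long receivesFailed) {
        this.sent = sent;
        this.sendsFailed = sendsFailed;
        this.received = received;
        this.receivesFailed = receivesFailed;
    }

    long sentCombined()     { return sent + sendsFailed; }
    long receivedCombined() { return received + receivesFailed; }

    // Equivalent to the diff's `new Long(a).compareTo(new Long(b))`,
    // minus the boxing (Long.compare only arrived in Java 7).
    static int compareCombinedSent(PeerStats a, PeerStats b) {
        long x = a.sentCombined(), y = b.sentCombined();
        return x < y ? -1 : (x == y ? 0 : 1);
    }
}
```

A peer that sent 5 messages and failed 2 sorts above one that sent 3 and failed 3, since 7 > 6.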
[freenet-dev] 6263 Status
6263 seems to have reasonable send queue numbers... It continues with nice low CPU load, which is very good.

Bumping the load up to 1.0F is wrong. Would be better to do:

    load = max(sentbytes/limitbytes, jobs/maxjobs, bla/maxbla);

This way, one can watch it approach 1.0, instead of it just slamming into 1.0 every now and then. Not a big issue...

hops since reset continues to be really high, 1.4 or so. I still want to see it trend up to a higher number.

None of the bookmarks work, well, except for the help index. :-(

And now for some Crazy Ideas(tm): Time to work on time based keys, or whatever? I'd really rather have whatever content my node can find, and it has found them all before, and has them on hard disk. It is stupid not to display them. I don't care if it has to remember the date part of the key in a flat file somewhere: [EMAIL PROTECTED]// --> 23453 and then try the current date, and when that doesn't work immediately (1 second or so), fall back to the more recent one, queuing the lookup for the current one. Let the user hit reload if they want to see the most recent. This will provide a better user experience, something freenet is lacking.

Also, I'd still like to see caching of successful answers to Qs to populate the store with such things. A store of at least 100,000 of them (configurable), with a max timeout of, say, one week or one month (configurable). Coupled with that, when a node gets loaded, I'd like to see Qs answered with pointers to people we last gave data to. The idea is for massively popular data to chase down to the last few people that handed out the data, a la bit-torrent, so that we can shift load effortlessly from very overloaded nodes to nodes that might not have any other keys at all on them (and hence, no one even Qing them). This would be done with a certain probability, so that content doesn't disappear just because the last person that grabbed it did. Maybe 0.5 to start with.
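The suggested load formula could be sketched like this (the parameter names are placeholders for whichever resource counters fred actually tracks; this is not fred's real signature):

```java
// Sketch of the proposed load metric: instead of clamping load to 1.0F
// whenever any single resource saturates, report the maximum
// utilization ratio across resources, so the value approaches 1.0
// smoothly and can be watched as it climbs.
final class LoadEstimate {
    static float load(long sentBytes, long limitBytes, int jobs, int maxJobs) {
        float bwRatio  = (float) sentBytes / (float) limitBytes;
        float jobRatio = (float) jobs / (float) maxJobs;
        return Math.max(bwRatio, jobRatio);
    }
}
```

With 500 of 1000 bytes used and 75 of 100 jobs running, the reported load is 0.75 rather than an abrupt 1.0.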
The idea is to improve the network's ability to find the data quickly, accurately, and on nodes that aren't overloaded and that do have the data, reducing the amount of DNFs, reducing the amount of Qs that get QRs. If those nodes don't want the load, they can pass off the data to some other node, and return answers to the new end of the line.

Also, love to see a few nodes (1/200 to 1/10,000 of them) that have lots of bw, threads and open connections, run up to 2,000-10,000 connections and never do source reset or cache anything and just accept in queries and never QR anything. They should just let the request timeout on the client when the loading gets to be too much. If this state persists, they should ask a high uptime, high bw client to become a super node, peer with them, and move half of the existing connections to the new super node. Every node would connect to exactly one more-super node. A super node should connect to roughly the same depth of other nodes. So, for example, the first super node up from the grunts would have all grunts (around 200-10,000 of them) plus 1 connection to a more super node. This node would have 200-10,000 super nodes which each have 200-10,000 grunts, or 40,000-100,000,000 grunts. And so on... at a HTL around 10, we hit a network size of 1 grunts.

You can think of this as a routing of last resort if you want, though. If freenet normally finds the data, great, but when it can't, send the Q to the super node. I predict that the super node will have a 100% hit rate and a tSuccessSearch in the <1s range (assuming positive caching). Personally, I think it'd work so well that we'd rather go to the super node first, and then if the data isn't found via the super node, we can always do a normal style freenet lookup. We could have fred keep tabs on how well the super node is working. If it works, great, and if it doesn't, don't use it.
The outbound requests out of a super node to the next higher super node can be just those requests that are the most popular among all the clients and that haven't been found yet or haven't been updated in a while. This is not unlike the current problem of an unpopular bit of data on one node in all of freenet not being found, because you can't do a HTL of 5000 to find the data, so I don't think such a ranking would hurt; also, to the extent it does hurt, fred will try a normal lookup in the next step anyway.

I think this would remove a ton of load from freenet and make it more efficient while not compromising anything. We treat the center as transparent and have the edges do things like data source reset and sending the request to another grunt a certain amount of the time (say 50% of the time). Also, any data that could be collected using the mechanism can already be collected by having someone that is interested in such data build a network of core super nodes without telling anyone. If they manage to find data quickly and accurately all the time, fre
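As a back-of-the-envelope check of the fan-out numbers in the proposal above: if each super node connects to between 200 and 10,000 nodes one level down, two levels of fan-out cover fanout^2 grunts. The `FanOut.coverage` helper below is purely illustrative, not part of fred.

```java
// Computes how many leaf-level nodes ("grunts") a super node hierarchy
// reaches: fanout connections per node, multiplied over each level.
final class FanOut {
    static long coverage(long fanout, int levels) {
        long total = 1;
        for (int i = 0; i < levels; i++)
            total *= fanout; // each level multiplies reach by the fanout
        return total;
    }
}
```

At a fanout of 200 over two levels this gives 40,000 grunts; at 10,000 it gives 100,000,000, matching the 40,000-100,000,000 range quoted in the email.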
[freenet-CVS] freenet/src/freenet PeerHandler.java, 1.34, 1.35 ConnectionHandler.java, 1.190, 1.191
Update of /cvsroot/freenet/freenet/src/freenet In directory sc8-pr-cvs1:/tmp/cvs-serv17681/src/freenet Modified Files: PeerHandler.java ConnectionHandler.java Log Message: Only count succeeded messages in the 'messages' member in ConnectionHandler Extracted a couple of code fragments in ConnectionHandler.innerProcess() into private helper methods. Made ConnectionHandler notify PeerHandler of received messages, used for accounting of messagetypes in PH Index: PeerHandler.java === RCS file: /cvsroot/freenet/freenet/src/freenet/PeerHandler.java,v retrieving revision 1.34 retrieving revision 1.35 diff -u -w -r1.34 -r1.35 --- PeerHandler.java21 Oct 2003 08:30:11 - 1.34 +++ PeerHandler.java21 Oct 2003 09:40:46 - 1.35 @@ -36,14 +36,14 @@ private static class messageTypeAndStatus { - public static String toString(PeerPacketMessage msg,boolean success){ - return msg.msg.getMessageName()+String.valueOf(success); + public static String toString(Message msg,boolean success){ + return msg.getMessageName()+String.valueOf(success); } } private static class myInt{ int intValue= 0;} //Stupid support class for mapping to an int protected void registerMessageSent(PeerPacketMessage m, boolean success) { - String key = messageTypeAndStatus.toString(m, success); + String key = messageTypeAndStatus.toString(m.msg, success); synchronized (messagesSentByTypeAndStatus) { myInt count = (myInt) messagesSentByTypeAndStatus.get(key); if (count == null) { @@ -59,7 +59,7 @@ messagesSendsFailed++; } } - protected void registerMessageReceived(PeerPacketMessage m) { + protected void registerMessageReceived(Message m) { String key = messageTypeAndStatus.toString(m, true); //Keep this worthless boolean to let the user worry about ONE type of HashMap key instead of two synchronized (messagesReceivedByTypeAndStatus) { myInt count = (myInt) messagesReceivedByTypeAndStatus.get(key); @@ -71,7 +71,7 @@ } //Update the message count cache messagesReceived++; - dataReceived += m.getLength(); + //dataReceived 
+= m.getLength(); //TODO:Fix this } @@ -224,8 +224,7 @@ */ public void unregisterConnectionHandler(ConnectionHandler ch) { if(Core.logger.shouldLog(Logger.MINOR,this)) - Core.logger.log(this, "Unregistering "+ch+" on "+this, - Logger.MINOR); + Core.logger.log(this, "Unregistering "+ch+" on "+this,Logger.MINOR); boolean notInRT = id == null || !(node.rt.references(id)); boolean notContactable = (ref == null || ref.noPhysical()) && notInRT; @@ -255,11 +254,8 @@ public void removeFromOCM() { removingFromOCM = true; - Core.logger.log(this, "Removing from OCM... "+this, - Logger.DEBUG); - SendFailedException sfe = - new SendFailedException(null, id, "Removing from OCM", - true); + Core.logger.log(this, "Removing from OCM... " + this, Logger.DEBUG); + SendFailedException sfe = new SendFailedException(null, id, "Removing from OCM", true); synchronized(messages) { if(!messages.isEmpty() || (!messagesWithTrailers.isEmpty())) Core.logger.log(this, "Lost all connections for "+id+ @@ -286,8 +282,7 @@ removedFromOCM = true; removedFromOCMLock.notifyAll(); } - Core.logger.log(this, "Removed from OCM: "+this, - Logger.DEBUG); + Core.logger.log(this, "Removed from OCM: " + this, Logger.DEBUG); } public void waitForRemovedFromOCM() { @@ -314,8 +309,7 @@ new Exception("debug"), Logger.DEBUG); if(nr.supersedes(ref)) { if(logDEBUG) - Core.logger.log(this, "superced
[freenet-CVS] freenet/src/freenet PeerPacketMessage.java,1.13,1.14
Update of /cvsroot/freenet/freenet/src/freenet In directory sc8-pr-cvs1:/tmp/cvs-serv8245/src/freenet Modified Files: PeerPacketMessage.java Log Message: Indenting Index: PeerPacketMessage.java === RCS file: /cvsroot/freenet/freenet/src/freenet/PeerPacketMessage.java,v retrieving revision 1.13 retrieving revision 1.14 diff -u -w -r1.13 -r1.14 --- PeerPacketMessage.java 20 Oct 2003 12:53:24 - 1.13 +++ PeerPacketMessage.java 21 Oct 2003 08:36:19 - 1.14 @@ -28,23 +28,24 @@ public String toString() { - return super.toString() + ":" + msg + ":" + raw + ":" + cb + - ":" + finished; + return super.toString() + ":" + msg + ":" + raw + ":" + cb + ":" + finished; } -public PeerPacketMessage(Identity i, Message msg, MessageSendCallback cb, -int priority, long expires, PeerHandler ph) { + public PeerPacketMessage(Identity i, Message msg, MessageSendCallback cb, int priority, long expires, PeerHandler ph) { startTime = System.currentTimeMillis(); if(expires > 0) expiryTime = startTime + expires; - else expiryTime = 0; + else + expiryTime = 0; this.id = i; this.msg = msg; - if(msg == null) throw new NullPointerException(); + if (msg == null) + throw new NullPointerException(); this.cb = cb; this.priority = priority; this.ph = ph; - if(ph == null) throw new NullPointerException(); + if (ph == null) + throw new NullPointerException(); } /** Set the message up to send on a connection using a specific @@ -56,8 +57,7 @@ */ public void resolve(Presentation p) { if(Core.logger.shouldLog(Logger.DEBUG, this)) { - Core.logger.log(this, "resolve("+p+") for "+this, - Logger.DEBUG); + Core.logger.log(this, "resolve(" + p + ") for " + this, Logger.DEBUG); } finished = false; if(this.p == p) @@ -78,9 +78,7 @@ this.content = bais.toByteArray(); } } catch (IOException e) { - Core.logger.log(this, "Impossible exception: "+e+ - " writing message "+raw+","+cb+ - " to BAIS", Logger.ERROR); + Core.logger.log(this, "Impossible exception: " + e + " writing message " + raw + "," + cb + " to BAIS", 
Logger.ERROR); throw new IllegalStateException("Impossible exception!: "+e); } } @@ -110,11 +108,9 @@ */ public void notifySuccess(TrailerWriter tw) { ph.registerMessageSent(this,true); - Core.logger.log(this, "notifySuccess("+tw+") for "+this, - Logger.DEBUG); + Core.logger.log(this, "notifySuccess(" + tw + ") for " + this, Logger.DEBUG); if(finished) { - Core.logger.log(this, "notifySuccess on "+this+" already finished!", - new Exception("debug"), Logger.MINOR); + Core.logger.log(this, "notifySuccess on " + this +" already finished!", new Exception("debug"), Logger.MINOR); return; } finished = true; @@ -124,36 +120,27 @@ if(sendTime > 5 * 60 * 1000) { long seconds = sendTime/1000; long secondsSinceConn = (sentTime - ph.lastRegisterTime)/1000; - Core.logger.log(this, "Took "+seconds+" seconds to send "+ - this+"(notifySuccess("+tw+ - ")! (last connection registered "+ - secondsSinceConn+" seconds ago on "+ph, - ph.probablyNotConnectable() ? Logger.MINOR : - Logger.NORMAL); + Core.logger.log(this, "Took " + seconds + " seconds to send " + this +"(notifySuccess(" + tw + ")! (last connection registered " + secondsSinceConn + " seconds ago on " + ph, ph.probablyNotConnectable() ? Logger.MINOR : Logger.NORMAL); } Core.diagnostics.occurrenceContinuous("messageSendTime", sendTime); if(ph.ref != null) - Core.diagnostics.occurrenceContinuous("messageSendTimeContactable", - sendTime); + Core.diagnostics.occurrenceContinuous("messageSendTimeContactable", sendTime); else - Core.diagnostics.occurrenceContinuous("messageSendTimeNonContactable", - sendTime); + Core.diagnostics.occurrenceContinuous("messageSendTimeNonContactable", sendTime); if(msg instanceof freenet.message.Request) - Core.diagnostics.occurrenceContinuous("messageSendTimeRequest", -
[freenet-CVS] freenet/src/freenet PeerHandler.java, 1.33, 1.34 OpenConnectionManager.java, 1.137, 1.138
Update of /cvsroot/freenet/freenet/src/freenet In directory sc8-pr-cvs1:/tmp/cvs-serv7582/src/freenet Modified Files: PeerHandler.java OpenConnectionManager.java Log Message: Stopped removing connections which might still be reading data from the PH, they will removed themselves later on when they are fully closed. Moved PH message accounting stuff into a class made for this, moved some getters from PH to the accounting class. Prepared for PH accounting of received messages. Changed some string-building code fragments from multiple lines to single line, makes the code easier to read. Index: PeerHandler.java === RCS file: /cvsroot/freenet/freenet/src/freenet/PeerHandler.java,v retrieving revision 1.33 retrieving revision 1.34 diff -u -w -r1.33 -r1.34 --- PeerHandler.java21 Oct 2003 06:43:11 - 1.33 +++ PeerHandler.java21 Oct 2003 08:30:11 - 1.34 @@ -24,10 +24,16 @@ final LinkedList messages; final LinkedList messagesWithTrailers; - Hashtable messagesHandledByTypeAndStatus = new Hashtable(); //Maps from String containing message name and status to number of that messagetype sent (class myInt) - long messagesSent = 0; //Can be figured out from the hash table above but kept here as a cache for fast access - long messagesReceived =0; - long messagesFailed = 0; //Can be figured out from the hash table above but kept here as a cache for fast access + public static class MessageAccounter + { + private Hashtable messagesSentByTypeAndStatus = new Hashtable(); //Maps from String containing message name and status to number of that messagetype sent (class myInt) + private Hashtable messagesReceivedByTypeAndStatus = new Hashtable(); //Maps from String containing message name and status to number of that messagetype received (class myInt) + private long messagesSent = 0; //Can be figured out from the hash table above but kept here as a cache for fast access + private long messagesSendsFailed = 0; //Can be figured out from the hash table above but kept here as a cache for fast 
access + private long messagesReceived =0; + protected long dataSent =0; //The number of bytes sent to this Peer + protected long dataReceived =0; //The number of bytes received from this Peer + private static class messageTypeAndStatus { public static String toString(PeerPacketMessage msg,boolean success){ @@ -35,10 +41,57 @@ } } private static class myInt{ int intValue= 0;} //Stupid support class for mapping to an int - protected long dataSent =0; //The number of bytes sent to this Peer - protected long dataReceived =0; //The number of bytes received from this Peer + protected void registerMessageSent(PeerPacketMessage m, boolean success) { + String key = messageTypeAndStatus.toString(m, success); + synchronized (messagesSentByTypeAndStatus) { + myInt count = (myInt) messagesSentByTypeAndStatus.get(key); + if (count == null) { + count = new myInt(); + messagesSentByTypeAndStatus.put(key, count); + } + count.intValue++; + } + if (success) { //Update the message count cache + messagesSent++; + dataSent += m.getLength(); //Should we count failed data too? + } else { + messagesSendsFailed++; + } + } + protected void registerMessageReceived(PeerPacketMessage m) { + String key = messageTypeAndStatus.toString(m, true); //Keep this worthless boolean to let the user worry about ONE type of HashMap key instead of two + synchronized (messagesReceivedByTypeAndStatus) { + myInt count = (myInt) messagesReceivedByTypeAndStatus.get(key); + if (count == null) { + count = new myInt(); + messagesReceivedByTypeAndStatus.put(key, count); + } + count.intValue++; + } + //Update the message count cache + messagesReceived++; + dataReceived += m.getLength(); + + } + + public long getMessgesSent() + { + return messagesSent; +
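The accounting pattern in this diff — a `Hashtable` keyed by message name plus send status, mapping to a mutable counter box (the diff's `myInt`) so hot-path increments don't allocate a new object each time — can be sketched standalone as follows. `MessageCounter` is a hypothetical name; the surrounding Freenet types (`PeerPacketMessage`, `PeerHandler`) are omitted.

```java
import java.util.Hashtable;

// Standalone sketch of PeerHandler.MessageAccounter's counting scheme.
// Raw Hashtable is used deliberately to match the Java 1.4-era code.
final class MessageCounter {
    // Mutable int box, as in the diff's `myInt`: lets us increment the
    // stored count in place instead of replacing an immutable Integer.
    private static final class MyInt { int intValue = 0; }

    // Maps "<messageName><success>" -> MyInt count.
    private final Hashtable counts = new Hashtable();
    private long messagesSent = 0; // cached total, as in the diff

    void registerMessageSent(String messageName, boolean success) {
        String key = messageName + String.valueOf(success);
        synchronized (counts) {
            MyInt count = (MyInt) counts.get(key);
            if (count == null) {
                count = new MyInt();
                counts.put(key, count);
            }
            count.intValue++;
        }
        if (success)
            messagesSent++; // only successes feed the cached total
    }

    long getMessagesSent() { return messagesSent; }
}
```

Two successful sends and one failed send of the same message type leave the cached success total at 2, while the per-type table keeps separate entries for the success and failure keys.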
Re: [freenet-dev] Trimming replies
On Mon, Oct 20, 2003 at 06:28:45PM +0100, Dave Hooper wrote:
> The reason most often given is that several of the developers use an email
> client where this isn't necessary and in fact is discouraged (i.e. Mutt).
> I've never used mutt, but if it encourages user laziness to the extent that
> it inconveniences others then I personally wouldn't recommend it to anybody
> and would consider revolting against its use in public circles.

Hrm, I think you're a little quick to blame mutt as the problem. Of course, I switched to mutt specifically because it made toad's emails easier to read, so perhaps I am not objective on this point :-p.

-- jj

-- I'm sick and fucking tired of not getting people drunk. -- blixco, http://www.kuro5hin.org/story/2003/8/20/105121/869

___ Devl mailing list [EMAIL PROTECTED] http://dodo.freenetproject.org/cgi-bin/mailman/listinfo/devl