On Wed, May 6, 2009 at 11:29 AM, Matthew Toseland
<toad at amphibian.dyndns.org> wrote:
> IMHO the number of disk reads is simply a function of how much memory there
> is, i.e. how much the operating system can use to cache node.db4o. We can do
> more caching at the JVM level; that will reduce CPU usage, but it will not
> reduce the actual IOs hitting the disk, and it will cost more heap memory.
>
> But we can reduce the number of disk writes further for downloads:
>
> Turn off scheduling by retry count. Schedule solely by priority, then
> round-robin over the different request clients and their requests, then
> randomly over the individual SendableGet's: as we do now, but without the
> retry count layer.
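
If I understand the proposal, selection would look roughly like the sketch
below (made-up Java names, not the real scheduler classes): strict priority
first, then round-robin over the clients, then a random pick among that
client's SendableGet's, with no retry-count buckets anywhere.

import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

class SchedulerSketch {
    // One round-robin ring of clients per priority level, highest priority first.
    static class ClientQueue {
        final List<Object /* SendableGet */> gets = new ArrayList<Object>();
    }

    private final List<ArrayDeque<ClientQueue>> byPriority =
            new ArrayList<ArrayDeque<ClientQueue>>();
    private final Random random = new Random();

    Object selectRequest() {
        for (ArrayDeque<ClientQueue> ring : byPriority) {      // 1. strict priority order
            for (int i = 0; i < ring.size(); i++) {
                ClientQueue client = ring.pollFirst();         // 2. round-robin over clients
                ring.addLast(client);
                if (!client.gets.isEmpty()) {
                    // 3. random choice among that client's SendableGet's
                    return client.gets.get(random.nextInt(client.gets.size()));
                }
            }
        }
        return null; // nothing runnable right now
    }
}
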
>
> Only update the retry count for an individual block if MaxRetries != -1.
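
The "only update the retry count when MaxRetries != -1" part looks cheap to
do; something like this (illustrative names, not the real classes):

class RetryCounter {
    private final int maxRetries;   // -1 means "retry forever"
    private int retryCount;

    RetryCounter(int maxRetries) { this.maxRetries = maxRetries; }

    interface Persister { void store(Object o); }

    void onFailure(Persister db) {
        if (maxRetries == -1) return;   // infinite retries: no counter update, no disk write
        retryCount++;
        db.store(this);                 // only finite-retry requests ever hit the database here
    }
}
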
>
> Keep the cooldown queue entirely in RAM. Two structures so that we can select
> requests quickly:
>
> 1) A structure tracking all the keys fetched in the last half hour. For each
> key, record the last 3 times at which it was fetched. When selecting requests,
> we would not send any request for a key in this list - at least not for
> persistent requests.
>
> 2) A structure tracking all the BaseSendableGet's sent in the last half hour.
> For each, record the start and finish times for the last 3 times it was
> fetched. When selecting requests, we can shortcut having to ask the first
> structure for each block by excluding a whole BaseSendableGet through this
> structure.
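
Something along these lines could hold both structures entirely in RAM
(hypothetical names; the second map is keyed by a stable long ID rather than
the BaseSendableGet itself, for the activation reasons mentioned below):

import java.util.ArrayDeque;
import java.util.HashMap;
import java.util.Map;

class CooldownTracker {
    private static final long HALF_HOUR = 30 * 60 * 1000L;
    private static final int KEEP = 3;   // remember the last 3 fetches

    // 1) per key: the times of the last 3 fetches
    private final Map<String, ArrayDeque<Long>> keyTimes = new HashMap<String, ArrayDeque<Long>>();
    // 2) per BaseSendableGet (keyed by ID): start/finish pairs of the last 3 fetches
    private final Map<Long, ArrayDeque<long[]>> getTimes = new HashMap<Long, ArrayDeque<long[]>>();

    synchronized void keyFetched(String key, long now) {
        ArrayDeque<Long> times = keyTimes.get(key);
        if (times == null) keyTimes.put(key, times = new ArrayDeque<Long>());
        times.addLast(now);
        while (times.size() > KEEP) times.pollFirst();
    }

    synchronized void getFetched(long getID, long start, long finish) {
        ArrayDeque<long[]> times = getTimes.get(getID);
        if (times == null) getTimes.put(getID, times = new ArrayDeque<long[]>());
        times.addLast(new long[] { start, finish });
        while (times.size() > KEEP) times.pollFirst();
    }

    // Request selection: don't send a (persistent) request for a recently fetched key.
    synchronized boolean keyIsCoolingDown(String key, long now) {
        ArrayDeque<Long> times = keyTimes.get(key);
        if (times == null) return false;
        while (!times.isEmpty() && now - times.peekFirst() > HALF_HOUR)
            times.pollFirst();   // expire entries older than half an hour
        if (times.isEmpty()) { keyTimes.remove(key); return false; }
        return true;
    }

    // Shortcut: skip an entire BaseSendableGet whose last fetch finished recently.
    synchronized boolean getIsCoolingDown(long getID, long now) {
        ArrayDeque<long[]> times = getTimes.get(getID);
        return times != null && !times.isEmpty()
                && now - times.peekLast()[1] <= HALF_HOUR;
    }
}
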
>
> Memory cost? Well, we do actually keep the first structure already, in the
> FailureTable ... some changes would be needed perhaps. The second structure
> would be new, and somewhat problematic, as it would keep these objects in
> memory even if they are deactivated. We could introduce a unique identifier
> or even use db4o's object IDs to avoid this (probably the best solution). A
> simple hashset of the objects themselves probably would not work well for
> activation reasons.
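
On the object ID idea: db4o exposes IDs through ExtObjectContainer, so the
second structure could hold plain longs and only pull in (and activate) the
object when it is actually needed. Roughly like this - the class and method
names around the db4o calls are made up:

import com.db4o.ObjectContainer;

class GetByIdLookup {
    private final ObjectContainer container;

    GetByIdLookup(ObjectContainer container) { this.container = container; }

    // Store only this long in the in-RAM structure, not the object itself,
    // so deactivated BaseSendableGet's aren't pinned in memory.
    long idFor(Object baseSendableGet) {
        return container.ext().getID(baseSendableGet);
    }

    // Resolve and activate the object only when we actually need it again.
    Object resolve(long id, int activationDepth) {
        Object obj = container.ext().getByID(id);
        if (obj != null) container.activate(obj, activationDepth);
        return obj;
    }
}
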
>
> Network performance cost? Well, what would be the impact of not scheduling by
> retry count any more? I dunno, any opinions?
>
> CPU cost? We might spend more time trying to find requests to run ... this
> might even result in more disk reads ... ? But on the whole it ought to be
> positive ...
>
> Code changes? Well, we could keep SplitFileFetcherSubSegment and stay
> backward compatible, though it'd be a bit awkward... eventually we'd want a
> new class merging SFFS and SFFSS...
>

RAM is cheap nowadays, and if we make this an option then low-memory
nodes can still trade disk I/O for RAM.  I give Freenet 512MB and don't
miss it.  The disk I/O, however, is unacceptable; since db4o was merged
it has forced me to run Freenet only when I am not using my PC...

-- 
I may disagree with what you have to say, but I shall defend, to the
death, your right to say it. - Voltaire
Those who would give up Liberty, to purchase temporary Safety, deserve
neither Liberty nor Safety. - Ben Franklin
