On Wednesday 06 May 2009 23:52:22 Juiceman wrote:
> 
> On Wed, May 6, 2009 at 12:50 PM, Juiceman  wrote:
> > On Wed, May 6, 2009 at 11:37 AM, Matthew Toseland wrote:
> >> On Wednesday 06 May 2009 14:43:59 Victor Denisov wrote:
> >>> Matthew Toseland wrote:
> >>> > This is the downside of db4o. If it is a widespread problem, we're
> >>> > gonna have to revert it. Which means throwing away more than 6 months
> >>> > work largely funded by Google's $18K.
> >>>
> >>> I think that using a database is a good idea (although I personally
> >>> would've opted for a relational database such as Derby). So I'd prefer
> >>> to try and understand and fix the issue rather than hiding from it :-).
> >>>
> >>> > My database queue is usually pretty empty, even with queued downloads,
> >>> > but I have 8G and fast mirrored disks...
> >>>
> >>> The problem is that Freenet *doesn't* even use the amount of memory I
> >>> provide it with (I've yet to see it use more than 120 MB out of the 320
> >>> I allow for the heap). I'd be willing to dedicate as much memory as
> >>> required if only it'd help.
> >>>
> >>> My hard drives are nothing special - 250 GB 7200 RPM Seagate ones, 16 MB
> >>> cache, SATA2, no NCQ - though definitely not the slowest out there. I
> >>> see ~35 MB/s read speed and ~28 MB/s write speed for medium-sized files,
> >>> and ~5 to 8 MB/s for small files, in the tests I've done. I'll probably
> >>> have to test the same from inside Java to make absolutely sure that it's
> >>> not some weird JVM issue on my platform, though.
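> >>>
> >>> Something like this quick-and-dirty timing loop is what I have in mind
> >>> (plain java.io; the file name and sizes are just placeholders):
> >>>
> >>> import java.io.*;
> >>>
> >>> public class DiskSpeedTest {
> >>>     public static void main(String[] args) throws IOException {
> >>>         final int BUF = 32 * 1024;             // one 32 KB chunk per call
> >>>         final long TOTAL = 256L * 1024 * 1024; // 256 MB test file
> >>>         byte[] buf = new byte[BUF];
> >>>         long t = System.currentTimeMillis();
> >>>         FileOutputStream out = new FileOutputStream("speedtest.tmp");
> >>>         for (long done = 0; done < TOTAL; done += BUF)
> >>>             out.write(buf);
> >>>         out.getFD().sync(); // time the disk, not the OS write cache
> >>>         out.close();
> >>>         long wMs = System.currentTimeMillis() - t;
> >>>         t = System.currentTimeMillis();
> >>>         FileInputStream in = new FileInputStream("speedtest.tmp");
> >>>         while (in.read(buf) > 0) { } // read it straight back
> >>>         in.close();
> >>>         long rMs = System.currentTimeMillis() - t;
> >>>         // bytes/1024 per millisecond is (close enough to) MB/s
> >>>         System.out.println("write " + (TOTAL / 1024 / wMs) + " MB/s, "
> >>>                 + "read " + (TOTAL / 1024 / rMs) + " MB/s");
> >>>     }
> >>> }
> >>>
> >>> (The read-back will mostly be served from the OS cache unless the file
> >>> is bigger than RAM, so the write figure is the honest one.)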
> >>>
> >>> > 2650 handles is strange, on unix we are generally limited to 1024 and
> >>> > generally we don't exceed that. Both of your problems may be caused by
> >>> > flaky hardware, but frankly we do need to run on flaky real world
> >>> > hardware. :|
> >>>
> >>> I don't have Freenet running right now, will check it later. But I2P is
> >>> using 2670 handles right now, and Azureus uses 1450 - so 2600 for
> >>> Freenet is definitely nothing out of the ordinary on Windows. Oh, and
> >>> the highest handle user on my machine is MySQL, which uses ~69000
> >>> handles and works absolutely fine :-).
> >>>
> >>> >> Same here. Enormous disk queues. I've also compared i/o counts with
> >>> >> i/o bytes read/written - that's how I know that i/o operations are
> >>> >> small. In the statistics screen, I routinely see 100+ outstanding
> >>> >> database jobs. It can't be good.
> >>> >
> >>> > This just confirms that disk I/O is the problem ... and almost
> >>> > certainly caused by db4o as it goes away if nothing is queued.
> >>>
> >>> My thinking exactly. Would providing you with a snapshot of CPU/memory
> >>> performance under YourKit Profiler (I have academic licenses for both
> >>> 7.5 and 8.0, IIRC) or VisualVM (which is now part of the JDK
> >>> distribution) on my machine help? Any logging I can turn on to help?
> >>> BTW, I have logging set to ERROR for now, as at NORMAL level it logs
> >>> ~2 MB per minute, adding noticeably to overall disk contention.
> >>>
> >>> Regards,
> >>> Victor Denisov.
> >>
> >> One other thing, for both you and Juiceman:
> >> How's the CPU usage? Given how much RAM you have I would expect node.db4o
> >> to be cached in memory (how big is it?). But doing a read through the OS
> >> to the OS disk cache may cost a lot of CPU (context switches etc.) ... Is
> >> there a lot of CPU usage for the Freenet process? To the point that it
> >> might be the cause of the poor overall system performance? And how much
> >> of the CPU usage is system time?
> >>
> >
> > Freenet CPU usage fluctuates between 2 and 27% on a quad-core system.
> > The rest of the machine rarely uses more than 15% unless I am gaming,
> > and even then it only hits 50%.  CPU usage is quite acceptable for now.
> > I have 3 GB of RAM, with 512 MB allocated to Freenet.
> 
> Node.db4o was 375 MB.  No uploads, 1 GB of queued downloads.
> 
> How often is this file written to?  Any way to queue writes in a RAM
> buffer and write them to disk periodically?

I don't think so, at least not easily - i.e. not without a custom IoAdapter 
able to buffer many commits separately. What I don't understand is what all 
these writes are *for*. If it's just downloads, most of the time it should 
just be selecting a SplitFileFetcherSubSegment, fetching all the blocks in it 
(without accessing the database), updating them all at once when they've 
failed, and then selecting a new segment - roughly every 2 minutes.
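
To make "custom IoAdapter" concrete, the shape of it is roughly the class 
below. This is a sketch of the buffering idea only: it uses a plain 
RandomAccessFile rather than db4o's real IoAdapter API (which I'm not going 
to reproduce from memory), all the names are made up, and it punts on 
exactly the hard part - keeping separate commits separate, and the fact 
that nothing is durable until flush() runs.

import java.io.IOException;
import java.io.RandomAccessFile;
import java.util.Map;
import java.util.TreeMap;

// Sketch only - not db4o's actual adapter API. A real version would
// subclass com.db4o.io.IoAdapter and delegate to the wrapped adapter;
// RandomAccessFile stands in for the underlying file here.
public class CoalescingFileBuffer {
    private final RandomAccessFile file;
    // Pending writes parked in RAM, keyed by file offset.
    private final TreeMap<Long, byte[]> pending = new TreeMap<Long, byte[]>();

    public CoalescingFileBuffer(String path) throws IOException {
        file = new RandomAccessFile(path, "rw");
    }

    public synchronized void write(long offset, byte[] data) {
        pending.put(offset, data.clone()); // no disk I/O yet
    }

    public synchronized int read(long offset, byte[] into) throws IOException {
        // Must check the parked writes first, or the database reads stale
        // data. Only exact-offset hits are handled; partial overlaps are
        // part of the hard part this sketch ignores.
        byte[] parked = pending.get(offset);
        if (parked != null && parked.length >= into.length) {
            System.arraycopy(parked, 0, into, 0, into.length);
            return into.length;
        }
        file.seek(offset);
        return file.read(into);
    }

    // Many small commits collapse into one burst of offset-ordered writes
    // (TreeMap iterates in key order) followed by a single fsync.
    public synchronized void flush() throws IOException {
        for (Map.Entry<Long, byte[]> e : pending.entrySet()) {
            file.seek(e.getKey());
            file.write(e.getValue());
        }
        pending.clear();
        file.getFD().sync(); // durable only from this point
    }

    public synchronized void close() throws IOException {
        flush();
        file.close();
    }
}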

However, I guess if most of the fetches succeed, that produces a lot more 
traffic. We have to write the block to disk when we fetch it, look up who 
owns it (because many fetchers can have a claim on one block), probably copy 
it, tell the SFFS and SFFSS about it, write the update to the SFFS, and then 
when we've got all the blocks for a segment do a load more work.
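
If it really is the per-block bookkeeping that generates the writes, the 
obvious shape of a fix is to batch the database work per segment rather 
than per block. A sketch of the pattern only - SegmentBatcher, Database and 
FetchedBlock are invented names, not the real classes:

import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: accumulate per-block results in RAM and touch the
// database once per completed segment, instead of once per block.
class SegmentBatcher {
    interface Database { void store(Object o); void commit(); }

    static class FetchedBlock {
        final int index;
        final byte[] data;
        FetchedBlock(int index, byte[] data) { this.index = index; this.data = data; }
    }

    private final Database db;
    private final List<FetchedBlock> arrived = new ArrayList<FetchedBlock>();
    private final int blocksNeeded;

    SegmentBatcher(Database db, int blocksNeeded) {
        this.db = db;
        this.blocksNeeded = blocksNeeded;
    }

    // Called for every block that comes in: RAM only, no database traffic.
    synchronized void onBlockFetched(int index, byte[] data) {
        arrived.add(new FetchedBlock(index, data));
        if (arrived.size() >= blocksNeeded)
            finishSegment();
    }

    // One store()+commit() for the whole segment. The trade-off is the
    // same as above: blocks held only in RAM are lost if the node dies
    // before the segment completes, and would have to be refetched.
    private void finishSegment() {
        for (FetchedBlock b : arrived)
            db.store(b);
        db.commit();
        arrived.clear();
    }
}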
