Is this still an issue? I'm seeing something very much like it: a loader 
process that started out running quickly has now slowed dramatically, with 
75% of the CPU going to PageStore.checkpoint() while fetching blocks 
of 100 ids from a sequence. 
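
A minimal sketch of the access pattern described above (sequence and object names are hypothetical, not from the original report): the loader reserves ids from a sequence in blocks of 100 rather than one id per row.

```sql
-- Hypothetical illustration of "fetching blocks of 100 ids from a
-- sequence": the sequence advances by 100 per call, and the
-- application hands out ids from the reserved block locally.
CREATE SEQUENCE loader_seq INCREMENT BY 100;

-- Each call reserves the next block of 100 ids.
SELECT loader_seq.NEXTVAL;
```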

I'm using 1.3.171.


On Thursday, March 11, 2010 9:34:17 PM UTC-8, ch...@schanck.net wrote:
>
> Thomas,
>
> That's excellent detective work; I was despairing of replicating it.
>
> I'll move the log size up significantly and let you know if that lets
> me get to the current version.
>
> Thanks a lot!
>
> Chris
>
> On Mar 11, 3:16 pm, Thomas Mueller <thomas.tom.muel...@gmail.com>
> wrote:
> > Hi,
> >
> > I can now reproduce the problem. The database writes many
> > unnecessary checkpoints, which slows down the operation. It does
> > this because the log "file" (it's no longer a file, it's a
> > segment) is too large. However, if there is an open transaction,
> > the old log segment can't be deleted, so a new segment is created
> > for every 32 (by default) sequences. I will fix that in the next
> > release.
> >
> > A workaround is to use a larger max_log_size or smaller transactions.
> >
> > Regards,
> > Thomas
>
>
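
The workaround quoted above can be sketched as follows (the size value is illustrative; H2 interprets MAX_LOG_SIZE in megabytes):

```sql
-- Raise the maximum transaction log size so fewer checkpoints are
-- forced while a long transaction holds old log segments alive.
SET MAX_LOG_SIZE 64;

-- Alternatively (or additionally), commit in smaller batches so the
-- old log segments can be reclaimed sooner.
COMMIT;
```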

-- 
You received this message because you are subscribed to the Google Groups "H2 
Database" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to h2-database+unsubscr...@googlegroups.com.
To post to this group, send email to h2-database@googlegroups.com.
Visit this group at http://groups.google.com/group/h2-database.
For more options, visit https://groups.google.com/groups/opt_out.