Hi,

I can now reproduce the problem. The database writes many checkpoints
unnecessarily, which slows down the operation. It does that because
the transaction log "file" (it is no longer a file, but a segment) is too
large. However, if there is an open transaction, the old log segment
can't be deleted, so a new segment is created for every 32 (by
default) sequences. I will fix that in the next release.

A workaround is to use a larger max_log_size or smaller transactions.
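As a sketch of the first workaround (the value 128 MB is just an example, not a recommended setting), the limit can be raised with H2's SET MAX_LOG_SIZE statement:

```sql
-- Raise the maximum transaction log size (in megabytes) so
-- fewer checkpoints and new log segments are created.
SET MAX_LOG_SIZE 128;
```

For the second workaround, committing in smaller batches releases the open transaction sooner, so old log segments can be deleted instead of piling up.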

Regards,
Thomas

-- 
You received this message because you are subscribed to the Google Groups "H2 
Database" group.
To post to this group, send email to [email protected].
To unsubscribe from this group, send email to 
[email protected].
For more options, visit this group at 
http://groups.google.com/group/h2-database?hl=en.