I have a big table that gets a lot of inserts. Rows are inserted 10k at a
time with a table function. At around 2.5 million rows, inserts slow down
from 2-7s to around 15-20s. The table's .dat file is around 800-900 MB.
I have durability set to test, table-level locks, a primary key index and
Could be a checkpoint. BTW, to speed up a bulk load you may want to use
large log files located on a separate disk from the data.
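For reference, Derby exposes both of these knobs: the log file size is governed by the `derby.storage.logSwitchInterval` property, and the log can be placed on its own disk with the `logDevice` connection attribute, which can only be set when the database is created. A minimal sketch (the database name and path are placeholders):

```
# derby.properties -- switch to a new log file after 16 MB instead of
# the 1 MB default, so fewer log files are created during the bulk load.
derby.storage.logSwitchInterval=16777216

# JDBC URL at database-creation time -- put the transaction log on a
# separate disk from the data directory:
#   jdbc:derby:bigdb;create=true;logDevice=/otherdisk/derbylog
```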
2009/2/27, Brian Peterson <dianeay...@verizon.net>:
> I have a big table that gets a lot of inserts. Rows are inserted 10k at a
> time with a table function. At around 2.5 million
The application is running on a client machine. I'm not sure how to
tell if there's a different disk available that I could log to.
If a checkpoint is causing this delay, how do I manage that? Can I turn
checkpointing off? I already have durability set to test; I'm not
concerned about
I've increased the log size and the checkpoint interval, but it
doesn't seem to help.
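As far as I know, Derby has no switch that disables checkpointing entirely, but the amount of log that accumulates between checkpoints is controlled by the `derby.storage.checkpointInterval` property; raising it makes checkpoints less frequent, at the cost of each one having more work to do. A hedged sketch:

```
# derby.properties -- checkpoint after ~100 MB of log instead of the
# 10 MB default. The documented valid range is 100000 bytes to 128 MB.
derby.storage.checkpointInterval=104857600
```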
It looks like the inserts begin to slow down dramatically once the table
reaches its initial allocation of pages. Things just fly along until it
gets to about 1100 pages (I've allocated an initial 1000 pages).
Ok,
For testing: if you allocate 2000 pages, then if my thinking is right,
you'll fly along until you get to about 2100 pages.
It sounds like you're hitting a snag: once you exhaust the initial
allocation of pages, Derby only allocates a small number of pages at a
time.
I thought I read in the documentation that 1000 was the max initial pages
you could allocate, and after that, Derby allocates a page at a time. Is
there some other setting for getting it to allocate more at a time?
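For what it's worth, the documented ceiling for `derby.storage.initialPages` is indeed 1000, but the allocation is counted in pages, so raising `derby.storage.pageSize` stretches the same 1000-page allowance. Both properties are read when a table or index is created, so they must be set beforehand:

```
# derby.properties -- set BEFORE issuing CREATE TABLE; these values are
# read at table/index creation time, not afterwards.
# Max page size is 32768 bytes (default 4096); initialPages maxes out
# at 1000, so 1000 pages x 32 KB = roughly 32 MB pre-allocated.
derby.storage.pageSize=32768
derby.storage.initialPages=1000
```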
Brian
From: Michael Segel [mailto:mse...@segel.com] On Behalf Of