It could be a checkpoint. By the way, to speed up the bulk load you may want
to use larger log files located on a separate disk from the data files.
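
Just as a rough sketch (not tested against your load; the paths and sizes
below are made-up examples, and logDevice only takes effect when the database
is created), that tuning could look something like this:

import java.sql.Connection;
import java.sql.DriverManager;

public class BulkLoadTuning {
    public static void main(String[] args) throws Exception {
        // Larger transaction log files and a larger checkpoint interval
        // (both in bytes; these values are illustrative, not recommendations).
        System.setProperty("derby.storage.logSwitchInterval", "16777216");   // 16 MB per log file
        System.setProperty("derby.storage.checkpointInterval", "104857600"); // checkpoint every ~100 MB of log

        // logDevice puts the transaction log on a separate disk from the
        // data files; it only applies when the database is first created.
        Connection conn = DriverManager.getConnection(
                "jdbc:derby:D:/data/mydb;create=true;logDevice=E:/derbylogs");

        // ... run the bulk load here ...

        conn.close();
    }
}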

2009/2/27, Brian Peterson <dianeay...@verizon.net>:
> I have a big table that gets a lot of inserts. Rows are inserted 10k at a
> time with a table function. At around 2.5 million rows, inserts slow down
> from 2-7s to around 15-20s. The table's dat file is around 800-900M.
>
> I have durability set to "test", table-level locks, a primary key index, and
> another 2-column index on the table. Page size is at the maximum and the page
> cache is set to 4500 pages. The table gets compressed (in place) every 500,000
> rows. I'm using Derby 10.4 with JDK 1.6.0_07, running on Windows XP. I've ruled
> out anything from the rest of the application, including GC (memory usage
> follows a consistent pattern during the whole load). It is a local file system.
> The database has a fixed number of tables (so there is a fixed number of dat
> files in the database directory the whole time). The logs are getting cleaned
> up, so there are only a few dat files in the log directory as well.
>
> Any ideas what might be causing the big slowdown after so many loads?
>
> Brian
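
For reference, here is a minimal sketch of the settings and the in-place
compress described in the quoted message. The system properties and the
SYSCS_INPLACE_COMPRESS_TABLE procedure are standard Derby, but the database
path, schema, and table name are placeholders I made up:

import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.DriverManager;

public class LoadSetup {
    public static void main(String[] args) throws Exception {
        // These system-wide properties must be set before the Derby engine boots.
        System.setProperty("derby.system.durability", "test");   // skip log syncs
        System.setProperty("derby.storage.pageSize", "32768");   // maximum page size
        System.setProperty("derby.storage.pageCacheSize", "4500");

        Connection conn = DriverManager.getConnection("jdbc:derby:D:/data/mydb");

        // The periodic in-place compress mentioned in the post.
        CallableStatement cs = conn.prepareCall(
                "CALL SYSCS_UTIL.SYSCS_INPLACE_COMPRESS_TABLE(?, ?, ?, ?, ?)");
        cs.setString(1, "APP");        // schema (placeholder)
        cs.setString(2, "BIG_TABLE");  // table (placeholder)
        cs.setShort(3, (short) 1);     // purge rows
        cs.setShort(4, (short) 1);     // defragment rows
        cs.setShort(5, (short) 1);     // truncate end
        cs.execute();
        cs.close();
        conn.close();
    }
}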
