-Mike
> -----Original Message-----
> From: kristian.waa...@sun.com [mailto:kristian.waa...@sun.com]
> Sent: Tuesday, March 03, 2009 6:19 AM
> To: Derby Discussion
> Subject: Re: inserts slowing down after 2.5m rows
>
> Brian Peterson wrote:
> > I thought I read in the documentation
*From:* ...@segel.com
*Sent:* Friday, February 27, 2009 9:59 PM
*To:* 'Derby Discussion'
*Subject:* RE: inserts slowing down after 2.5m rows
Ok,
For testing: if you allocate 2000 pages then, if my thinking is right,
you'll fly along until you get to 2100 pages.
It sounds like you're
I would hope that you could configure the number of pages to be allocated in
blocks as the table grows.
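For reference, Derby does expose per-table preallocation knobs that are read at table-creation time. A minimal sketch (property names are real Derby storage properties; the page size, values, and table definition here are illustrative assumptions, and note that derby.storage.initialPages is capped at 1000):

```sql
-- Set before creating the table; values are illustrative, not recommendations.
CALL SYSCS_UTIL.SYSCS_SET_DATABASE_PROPERTY('derby.storage.pageSize', '32768');
CALL SYSCS_UTIL.SYSCS_SET_DATABASE_PROPERTY('derby.storage.initialPages', '1000');
-- Hypothetical table; the properties above apply to tables created afterwards.
CREATE TABLE big_table (id BIGINT NOT NULL, payload VARCHAR(200));
```

These properties only affect tables created after they are set; they do not resize an existing table.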
From: publicay...@verizon.net [mailto:publicay...@verizon.net]
Sent: Friday, February 27, 2009 8:48 PM
To: Derby Discussion
Subject: Re: inserts slowing down after
I've increased the log size and the checkpoint interval, but it
doesn't seem to help.
It looks like the inserts begin to dramatically slow down once the table
reaches the initial allocation of pages. Things just fly along until it
gets to about 1100 pages (I've allocated an initial 1000 pages).
The application is running on a client machine. I'm not sure how to
tell if there's a different disk available that I could log to.
If checkpointing is causing this delay, how do I manage it? Can I turn
checkpointing off? I already have durability set to test; I'm not
concerned about recovery.
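For anyone following along, the settings being discussed here can all go in derby.properties. A sketch under assumptions (the property names are real Derby tuning properties; the specific byte values are illustrative, not recommendations):

```properties
# derby.properties — illustrative values only
# Skip log syncs entirely; no crash-recovery guarantees. Test/bulk-load use only.
derby.system.durability=test
# Bytes of log written between checkpoints (~100 MB here; default is ~10 MB).
derby.storage.checkpointInterval=104857600
# Bytes written before switching to a new log file (~16 MB here; default is 1 MB).
derby.storage.logSwitchInterval=16777216
```

Raising checkpointInterval spaces the checkpoints out; it does not eliminate the pause when one finally runs.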
Could be the checkpoint. BTW, to speed up bulk loads you may want to use
large log files located on a separate disk from the data.
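To put the transaction log on a separate disk, Derby supports the logDevice connection URL attribute, which takes effect when the database is created (the paths below are hypothetical):

```
jdbc:derby:/data/derby/mydb;create=true;logDevice=/fastdisk/derby-log
```

For an existing database, the log location can't be changed with this attribute alone; it is honored at creation (or when restoring from a backup).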
2009/2/27, Brian Peterson:
> I have a big table that gets a lot of inserts. Rows are inserted 10k at a
> time with a table function. At around 2.5 million rows, inserts slow down