Unless you specify otherwise, every insert carries its own transaction
begin/commit. That's a lot of overhead for a single insert, no? Why
not use a single transaction for, say, each 1000 inserts? That would
strike a nice balance of safety and efficiency.
pseudo code for the insert:

begin
  for each row:
    insert the row
    every 1000 rows: commit, then begin again
commit
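A minimal sketch of that batching pattern. The thread is about PostgreSQL, but the pattern is driver-agnostic, so this sketch uses Python's stdlib sqlite3 for illustration; the table name, columns, and batch size of 1000 are just the thread's suggestion plus made-up details:

```python
import sqlite3

BATCH_SIZE = 1000     # one commit per 1000 inserts, as suggested above
TOTAL_ROWS = 200_000  # the row count from the thread

# isolation_level=None puts sqlite3 in autocommit mode, so the explicit
# BEGIN/COMMIT pairs below are the only transactions in play.
conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute("CREATE TABLE foo (id INTEGER, val TEXT)")

conn.execute("BEGIN")
for i in range(TOTAL_ROWS):
    conn.execute("INSERT INTO foo VALUES (?, ?)", (i, "x"))
    if (i + 1) % BATCH_SIZE == 0:
        conn.execute("COMMIT")  # close out this batch...
        conn.execute("BEGIN")   # ...and open the next one
conn.execute("COMMIT")  # commit whatever is left in the final batch

count = conn.execute("SELECT COUNT(*) FROM foo").fetchone()[0]
print(count)  # 200000
```

The point is only where the transaction boundaries fall: one round of commit overhead per batch instead of one per row.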
On Tue, 2007-02-06 at 01:35, Karen Hill wrote:
[snip] So far I've been sitting here for about 2 million ms
waiting for it to complete, and I'm not sure how many inserts postgres
is doing per second.
One way is to run analyze verbose on the target table and see how many
pages it has, and then
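For scale, the figures already quoted in the thread bound the rate from above: if all 200,000 rows had gone in during the roughly 2 million ms of waiting, that would be about 100 inserts per second. The numbers below are just the thread's own figures, nothing measured:

```python
# Back-of-envelope rate from the figures quoted in the thread.
rows = 200_000          # rows the function is inserting
elapsed_ms = 2_000_000  # "about 2 million ms" of waiting so far

elapsed_s = elapsed_ms / 1000  # 2000 seconds, roughly 33 minutes
rate = rows / elapsed_s        # inserts/sec IF every row were already in
print(rate)  # 100.0
```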
On Tue, 2007-02-06 at 10:40, Merlin Moncure wrote:
On 2/6/07, Scott Marlowe [EMAIL PROTECTED] wrote:
On Mon, 2007-02-05 at 18:35, Karen Hill wrote:
I have a pl/pgsql function that is inserting 200,000 records for
testing purposes. What is the expected time frame for this operation
on
On Tue, Feb 06, 2007 at 10:31:26 -0800,
Mark Lewis [EMAIL PROTECTED] wrote:
Sure it's possible:
CREATE TABLE parent (col1 int4);
-- insert many millions of rows into parent
CREATE TABLE child (col1 int4 REFERENCES parent(col1));
-- insert many millions of rows into child, very very
On 2/6/07, Mark Lewis [EMAIL PROTECTED] wrote:
actually, I get the stupid award also because RI check to unindexed
column is not possible :) (this haunts deletes, not inserts).
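The fix both posters are circling is an index on the referencing column, so each RI check on delete need not scan the whole child table. A minimal sketch of the same parent/child shape, again using Python's stdlib sqlite3 for illustration (sqlite additionally requires the referenced column to be a primary key, and needs the pragma to enforce RI at all; the index name and row counts are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # sqlite enforces RI only with this

conn.execute("CREATE TABLE parent (col1 INTEGER PRIMARY KEY)")
conn.execute("CREATE TABLE child (col1 INTEGER REFERENCES parent(col1))")

# Without this index, every DELETE on parent must scan all of child
# to verify no row still references the deleted value.
conn.execute("CREATE INDEX child_col1_idx ON child (col1)")

conn.executemany("INSERT INTO parent VALUES (?)", [(i,) for i in range(1000)])
conn.executemany("INSERT INTO child VALUES (?)",
                 [(i,) for i in range(0, 1000, 2)])  # even values only

# col1 = 1 has no referencing child rows, so the delete is allowed,
# and the RI check is an index probe rather than a table scan.
conn.execute("DELETE FROM parent WHERE col1 = 1")
remaining = conn.execute("SELECT COUNT(*) FROM parent").fetchone()[0]
print(remaining)  # 999
```

The same reasoning is why the thread notes this haunts deletes rather than inserts: an insert into child probes the parent's (already indexed) key, while a delete from parent must probe child.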
Subject: Re: [PERFORM] How long should it take to insert 200,000 records?
On Feb 5, 9:33 pm, [EMAIL PROTECTED] (Tom Lane) wrote:
Karen Hill [EMAIL PROTECTED] writes:
I have a pl/pgsql function that is inserting 200,000 records for
testing purposes. What is the expected time frame for this operation
on a pc with 1/2 a gig of ram and a 7200 RPM disk?
I think you
Karen Hill wrote:
The postgres version is 8.2.1 on Windows. The pl/pgsql function is
inserting to an updatable view (basically two tables).
CREATE TABLE foo1
(
) ;
CREATE TABLE foo2
(
);
CREATE VIEW viewfoo AS
(
);
CREATE RULE ruleFoo AS ON INSERT TO viewfoo DO INSTEAD
(
);
CREATE
Karen Hill [EMAIL PROTECTED] writes:
On Feb 5, 9:33 pm, [EMAIL PROTECTED] (Tom Lane) wrote:
I think you have omitted a bunch of relevant facts.
The postgres version is 8.2.1 on Windows. The pl/pgsql function is
inserting to an updatable view (basically two tables).
[ sketch of schema ]
I
I have a pl/pgsql function that is inserting 200,000 records for
testing purposes. What is the expected time frame for this operation
on a pc with 1/2 a gig of ram and a 7200 RPM disk? The processor is
a 2ghz cpu. So far I've been sitting here for about 2 million ms
waiting for it to complete,
Karen Hill [EMAIL PROTECTED] writes:
I have a pl/pgsql function that is inserting 200,000 records for
testing purposes. What is the expected time frame for this operation
on a pc with 1/2 a gig of ram and a 7200 RPM disk?
I think you have omitted a bunch of relevant facts. Bare INSERT is