On Feb 5, 9:33 pm, [EMAIL PROTECTED] (Tom Lane) wrote:
> Karen Hill <[EMAIL PROTECTED]> writes:
> > I have a pl/pgsql function that is inserting 200,000 records for
> > testing purposes. What is the expected time frame for this operation
> > on a PC with half a gig of RAM and a 7200 RPM disk?
>
> I think you have omitted a bunch of relevant facts. Bare INSERT is
> reasonably quick:
>
> regression=# create table foo (f1 int);
> CREATE TABLE
> regression=# \timing
> Timing is on.
> regression=# insert into foo select x from generate_series(1,200000) x;
> INSERT 0 200000
> Time: 5158.564 ms
> regression=#
>
> (this on a not-very-fast machine) but if you weigh it down with a ton
> of index updates, foreign key checks, etc., it could get slow ...
> also you haven't mentioned what else that plpgsql function is doing.
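The per-row overheads Tom mentions are easy to measure directly: time the same bulk insert into a bare table and into one carrying an index and a foreign key. A minimal sketch (all table and index names here are invented for illustration, not from the thread):

```sql
-- Bare table: an insert only appends heap tuples.
CREATE TABLE t_plain (id int, val text);

-- Referenced table for the foreign-key check.
CREATE TABLE parent (id int PRIMARY KEY);
INSERT INTO parent SELECT x FROM generate_series(1, 100) x;

-- Same shape, but every inserted row must also update the
-- btree index and look up its key in parent.
CREATE TABLE t_heavy (
    id  int REFERENCES parent (id),
    val text
);
CREATE INDEX t_heavy_val_idx ON t_heavy (val);

\timing on
INSERT INTO t_plain SELECT (x % 100) + 1, 'row ' || x
    FROM generate_series(1, 200000) x;
INSERT INTO t_heavy SELECT (x % 100) + 1, 'row ' || x
    FROM generate_series(1, 200000) x;
```

The second INSERT should come out noticeably slower than the first; the gap is the cost of index maintenance and FK checks, independent of anything plpgsql is doing.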
The PostgreSQL version is 8.2.1 on Windows. The pl/pgsql function is
inserting into an updatable view (basically two tables).
CREATE TABLE foo1
(
) ;
CREATE TABLE foo2
(
);
CREATE VIEW viewfoo AS
(
);
CREATE RULE ruleFoo AS ON INSERT TO viewfoo DO INSTEAD
(
);
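The definitions above are deliberately elided by the poster. For readers unfamiliar with INSTEAD rules, a hypothetical minimal version of such a setup (columns and rule body invented, not the poster's actual schema) might look like:

```sql
-- Hypothetical concrete versions of the elided definitions.
CREATE TABLE foo1 (x int);
CREATE TABLE foo2 (x int);

CREATE VIEW viewfoo AS
    SELECT x FROM foo1;

-- Each INSERT on the view is rewritten into inserts on both tables.
CREATE RULE ruleFoo AS ON INSERT TO viewfoo DO INSTEAD (
    INSERT INTO foo1 VALUES (NEW.x);
    INSERT INTO foo2 VALUES (NEW.x);
);
```

Note that rule expansion happens per statement: every single-row INSERT issued against the view becomes (here) two INSERTs, which matters when 200,000 of them are issued one at a time.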
CREATE OR REPLACE FUNCTION functionFoo() RETURNS VOID AS $$
BEGIN
    FOR i IN 1..200000 LOOP
        INSERT INTO viewfoo (x) VALUES (i);
    END LOOP;
END;
$$ LANGUAGE plpgsql;
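Looping 200,000 single-row INSERTs pays per-statement overhead, and the rule is expanded for every one of them. A common set-based alternative, sketched here as a suggestion rather than anything from the thread, replaces the loop body with one statement:

```sql
-- One INSERT instead of 200,000: the rule is expanded once and
-- the underlying inserts run as set operations.
INSERT INTO viewfoo (x)
SELECT i FROM generate_series(1, 200000) AS g(i);
```

This mirrors Tom's timing example above, where a single INSERT ... SELECT of 200,000 rows completed in about 5 seconds.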