On Tue, 23 Mar 2004 19:50:24 -0500 (EST), Bruce Momjian
<[EMAIL PROTECTED]> wrote:
Naomi Walker wrote:
I'm not sure of the correct protocol for getting things on the "todo"
list. Whom shall we beg?
Uh, you just ask and we discuss it on the list.
Are you using INSERTs from pg_dump? I assume so because COPY uses a
single transaction per command. Right now with pg_dump -d I see:
--
-- Data for Name: has_oids; Type: TABLE DATA; Schema: public; Owner: postgres
--
INSERT INTO has_oids VALUES (1);
INSERT INTO has_oids VALUES (1);
INSERT INTO has_oids VALUES (1);
INSERT INTO has_oids VALUES (1);
Seems that should be inside a BEGIN/COMMIT for performance reasons, and
to have the same behavior as COPY (fail if any row fails). Comments?
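For illustration, the same dump output wrapped in a single transaction
would look roughly like this (a hand-written sketch, not actual pg_dump
output):

BEGIN;
INSERT INTO has_oids VALUES (1);
INSERT INTO has_oids VALUES (1);
INSERT INTO has_oids VALUES (1);
INSERT INTO has_oids VALUES (1);
COMMIT;
-- One commit for the whole batch avoids per-row transaction overhead,
-- and if any INSERT fails the whole transaction aborts, matching
-- COPY's all-or-nothing behavior.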
As far as skipping errors goes, I am unsure about that one; if we put
the INSERTs in a transaction, we will have no way of rolling back only
the few inserts that fail.
That is right, but there are situations where you would prefer at least
some data to be inserted rather than having all changes rolled back
because of errors.
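A sketch of how only the failing rows could be discarded while keeping
the rest, using per-row savepoints (assuming a server with SAVEPOINT
support, which PostgreSQL did not yet have when this thread was written;
the failing value is made up for illustration):

BEGIN;
SAVEPOINT row1;
INSERT INTO has_oids VALUES (1);       -- succeeds
RELEASE SAVEPOINT row1;
SAVEPOINT row2;
INSERT INTO has_oids VALUES ('oops');  -- fails if the column is an integer
ROLLBACK TO SAVEPOINT row2;            -- undo only this row
COMMIT;                                -- the first row is kept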
---------------------------------------------------------------------------
>
>That brings up a good point. It would be extremely helpful to add two
>parameters to pg_dump: one to specify how many rows to insert before a
>commit, and another to tolerate X number of errors before dying (writing
>the "bad" rows to a file).
>
>
>At 10:15 AM 3/19/2004, Mark M. Huber wrote:
> >What it was, I guess, is that pg_dump makes one large transaction, and
> >our shell script wizard wrote a perl program to add a commit
> >transaction every 500 rows or whatever you set. Also, I should have
> >said that we were doing the recovery with the insert statements
> >created from pg_dump. So... my 500000 row table recovery took < 10 Min.
> >
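For reference, the rewritten dump described above would look something
like this (a sketch; big_table and the row values are placeholders, and
the batch size of 500 is whatever the script was told to use):

BEGIN;
INSERT INTO big_table VALUES (1);
INSERT INTO big_table VALUES (2);
-- ... rows 3 through 500 ...
COMMIT;
BEGIN;
INSERT INTO big_table VALUES (501);
-- ... rows 502 through 1000, and so on ...
COMMIT;
-- Each COMMIT makes its batch durable, so an error loses at most the
-- current batch instead of the entire load.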