> I think the reason you are seeing failures in the first function is
> that the initial DELETE is a no-op so it doesn't serialize anything,
> and then there is conflict when the two INSERTs proceed in parallel.
There is always data in the table that it deletes. So, by that reasoning, the second session calling
the first function should be serialized, yet for some reason it is running in parallel.
I just ran the function and there are now 61 rows in the table. I ran it again and there are still
61 rows. When I ran it in two sessions at the same time I got:
ERROR: duplicate key violates unique constraint "reschedulepoparts1_pkey"
SQL state: 23505
Context: SQL statement "insert into reschedulepoparts(popartid,priority,rescqty) SELECT a.popartid,
a.priority, sum(b.rescqty) FROM reschedulepoparts_2 a JOIN reschedulepoparts_1 b ON a.popartid =
b.popartid AND a.priority = b.priority group by a.popartid, a.priority"
PL/pgSQL function "populaterescheduleparts" line 3 at SQL statement
My database is PostgreSQL 8.0.4 running on Gentoo. Is there anything else I can do to give you debug
information?
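For reference, the relevant part of the first function boils down to a delete followed by the insert
shown in the error context; a simplified sketch (the function name and the INSERT are taken from the
error message above, the rest of the wrapper is just the obvious shape):

    create or replace function populaterescheduleparts() returns void as $$
    begin
        -- clear out the rows from the previous run
        delete from reschedulepoparts;
        -- repopulate; popartid is the primary key the duplicate error is raised on
        insert into reschedulepoparts(popartid, priority, rescqty)
        SELECT a.popartid, a.priority, sum(b.rescqty)
        FROM reschedulepoparts_2 a
        JOIN reschedulepoparts_1 b
          ON a.popartid = b.popartid AND a.priority = b.priority
        GROUP BY a.popartid, a.priority;
        return;
    end;
    $$ language plpgsql;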
> [ eyeballs second function some more... ] Actually, the second
> function is guaranteed to be serialized by that initial
> "update systemsettings" --- I assume that's a one-row table? The second
> guy in will be unable to get past that until the first guy commits, and
> then he'll see the first guy's updates and there will be no error.
Now I understand why the second function does not cause an error: the update statement serializes it.
(Yes, systemsettings is a one-row table.) The question remains why the first function is not being
serialized.
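If I follow, the serialization in the second function comes entirely from the row lock taken at the
top, something like this (the column name here is only illustrative; what matters is that the single
row in systemsettings gets updated and therefore locked):

    -- first statement of the second function
    update systemsettings set somecolumn = somecolumn;
    -- a second session executing the same update blocks here until the first
    -- session commits or rolls back; when it continues (in read committed mode)
    -- it sees the first session's committed rows, so its own delete/insert
    -- don't collide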
Tom Lane wrote:
Sim Zacks <[EMAIL PROTECTED]> writes:
> This function, when run in 2 separate sessions at the same time,
> causes a duplicate key error because popartid is a primary key.
> ...
> The next one is a bit longer. I would expect that if it was run in 2
> separate sessions that 1) it would generate the same error as the
> first one generated,
I would expect that too, assuming that it actually inserts the same set
of rows into the table as the first one does (how sure are you of that?).
I wonder whether you are testing them both under identical conditions.
In particular, is the initial DELETE really doing anything or is the
table usually empty to start with anyway? If there is something to
delete then that ought to serialize the two sessions, leading to no
error (because the second guy in will wait to see if the first guy
commits his deletion).
[ eyeballs second function some more... ] Actually, the second
function is guaranteed to be serialized by that initial
"update systemsettings" --- I assume that's a one-row table? The second
guy in will be unable to get past that until the first guy commits, and
then he'll see the first guy's updates and there will be no error.
I think the reason you are seeing failures in the first function is
that the initial DELETE is a no-op so it doesn't serialize anything,
and then there is conflict when the two INSERTs proceed in parallel.
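Spelled out, the interleaving being described would be roughly:

    -- session A: delete from reschedulepoparts;    -- finds nothing, so takes no row locks
    -- session B: delete from reschedulepoparts;    -- likewise a no-op, so B does not wait for A
    -- session A: insert into reschedulepoparts ... -- inserts the computed rows
    -- session B: insert into reschedulepoparts ... -- computes the same keys; blocks on the first
    --            conflicting unique-index entry and raises the duplicate key error (23505)
    --            as soon as A commits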
regards, tom lane