Tom Lane wrote:
> I think the reason you are seeing failures in the first function is
> that the initial DELETE is a no-op so it doesn't serialize anything,
> and then there is conflict when the two INSERTs proceed in parallel.

Here is a simple, reproducible example showing that the DELETE does not
cause the two runs to serialize:
create table testserial(id serial primary key, val int);

create or replace function inserttest() returns int as
$$
begin
        delete from testserial;
        for i in 1..100000 loop
                insert into testserial(val) values(i);
        end loop;
        return 1;
end;
$$ language plpgsql;

select inserttest();

Now there are 100,000 records in the table.

Run the function from two different sessions at the same time and you will see 200,000 records in the table instead of 100,000.
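As a quick check (a minimal sketch; the session labels are only for
illustration, and the 200,000 figure is the result described above):

-- session 1: select inserttest();
-- session 2: select inserttest();   (started at roughly the same time)
-- then, once both calls have returned:
select count(*) from testserial;     -- returns 200000 with the delete-only version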

I also tested with an UPDATE statement added before the DELETE:
create or replace function inserttest() returns int as
$$
begin
        update testserial set val=5 where val=1;
        delete from testserial;
        for i in 1..100000 loop
                insert into testserial(val) values(i);
        end loop;
        return 1;
end;
$$ language plpgsql;

When this function is run from two sessions at the same time, the two runs actually do serialize, and there are only 100,000 records in the table.
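For comparison, an explicit lock states the intent more directly than a
dummy UPDATE. This is only a sketch of an alternative, not something from
the thread, and it assumes the goal is simply to make the second call wait
until the first one commits:

create or replace function inserttest() returns int as
$$
begin
        -- the table lock blocks the second session until the first commits,
        -- so only one set of 100,000 rows survives
        lock table testserial in exclusive mode;
        delete from testserial;
        for i in 1..100000 loop
                insert into testserial(val) values(i);
        end loop;
        return 1;
end;
$$ language plpgsql;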

