I wrote:
 
> On the other hand, as a shop where we're probably going to set
> default_transaction_isolation = serializable in our
> postgresql.conf files and include trigger checks that we're
> running at that level, we can just boost those globally.  That may
> also work for others.
 
Just as a quick experiment I took Greg's example and tried it with
different costs, which eliminated the false positives for this
particular example all the way down to a 5-row table.  Execute this
on the first client:
 
set random_page_cost = 0.2;
set cpu_tuple_cost = 0.05;
drop table if exists t;
create table t (id bigint, value bigint);
insert into t (id, value) (select s, 1 from generate_series(1,5) as s);
create index t_idx on t (id);
begin transaction;
set transaction isolation level serializable;
select * from t where id = 2;
insert into t (id, value) values (-2, 1);
-- leave this transaction open and switch clients

Execute this on the second client:

set random_page_cost = 0.2;
set cpu_tuple_cost = 0.05;
begin transaction;
set transaction isolation level serializable;
select * from t where id = 3;
insert into t (id, value) values (-3, 0);
commit;
 
Then go back to the first client and commit -- no problem.
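 
The false positives go away because with these costs the planner
picks an index scan even on this tiny table, so SSI takes predicate
locks on the index page and the matching tuple rather than the
relation-level SIREAD lock a sequential scan would take, and the two
transactions no longer appear to conflict.  As a quick sanity check
(the exact plan text and cost numbers will vary by version and
settings), either client can look at the plan:
 
explain select * from t where id = 2;
-- With the settings above, expect something along the lines of
-- "Index Scan using t_idx on t".  At default costs this small
-- table gets a Seq Scan, whose relation-level predicate lock is
-- what produces the false-positive serialization failure.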
 
I make no representation that these are great numbers for any
particular workload; this is just meant as a quick illustration
that these behaviors are tunable.  With serializable transactions,
it probably is reasonable to figure that the cost of a sequential
scan or of reading a tuple includes the cost of some percentage of
transactions being rolled back and restarted.
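 
For reference, setting these globally as described in the quoted
text above might look like this in postgresql.conf (the cost values
are just the experimental ones from this message, not
recommendations, and take effect on reload):
 
# illustrative postgresql.conf entries only
default_transaction_isolation = 'serializable'
random_page_cost = 0.2    # favor index scans, finer predicate locks
cpu_tuple_cost = 0.05     # make per-tuple scanning look costlier
# apply with: SELECT pg_reload_conf();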
 
-Kevin
