Hello Ann,

thanks for your answer.


> use back versions of index pages for older transactions, which avoid
> the "whoops it's there but not in your view" problem at the cost of
> keeping old copies of index pages around.  Firebird avoids holding
> exclusive locks on sections of index when they are being split by
> allowing readers to move forward, but not backward, through an index.
> Yes, it might be possible to make an unqualified MAX faster by reading
> to the end, finding a qualified row (Hurray!) or if not, starting at
> the top and reading forward to the next lower value, and finding a
> qualifiying row or if not, starting again at the top and reading, and
> starting and reading and...  You see the problem.  Could be done,
> would be faster, is a lot of code, and encourages doing something
> that's better done with sequences/generators.

Still, I don't fully understand the problem. min() runs very fast. What if the 
first (min) value was deleted in some pending transaction? In that case, min() 
should have the same problems as max().

I think I'll give a descending index a try, since I don't need any sorting in my 
application.
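Just to make sure I have the idea right — a minimal sketch of what I expect (table and index names are placeholders, using Firebird's USING DESC INDEX syntax):

-- With a descending index, the largest key sits at the "front" of the
-- index, so an unqualified MAX should be answered by reading forward
-- to the first visible entry, the same way MIN works on an ascending index.
create table t (
  id int not null,
  primary key (id) using desc index t_pk_desc
);

-- Expected to be satisfied from the descending index:
select max(id) from t;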

Are the following statements correct for a descending index that is referenced 
as a foreign key in a second table:

create table a (
  id int not null,
  primary key (id) using desc index ai_pk
);

create table b (
  ref int not null
);

alter table b add constraint bc_fk1
  foreign key (ref) references a(id)
  on delete cascade
  using desc index bi_fk1;


Does a descending index have any performance drawbacks when inserting rows with 
ascending ids?

Thanks !

Marc

P.S. By the way, the MVCC design seems to work out very nicely — I ran into a lot 
of deadlock problems with concurrent access to a Derby database, which does not 
support MVCC.
