Good point. But when you use a LIMIT in a SELECT statement you WANT n RANDOM
tuples - is it wrong to get RANDOM tuples? So, by the same logic, is it wrong
to exclude n random tuples? Besides, if you want to DELETE just 1 tuple, why
should the executor have to scan the entire table instead of stopping as soon
as it finds that 1 tuple? Why should the LIMIT clause be used to speed up only
SELECT statements? If the programmer knows the expected number of affected
rows, why not use it to speed up DELETE/UPDATE as well?
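
For what it's worth, the effect I am after can already be approximated in
current PostgreSQL with a LIMITed subquery on ctid (the table and column
names below are only illustrative); a LIMIT on DELETE would just make this
direct:

    -- hypothetical syntax under discussion (not valid today):
    -- DELETE FROM event_log WHERE level = 'debug' LIMIT 1000;

    -- workaround that already works, using a LIMITed subquery on ctid:
    DELETE FROM event_log
    WHERE ctid IN (
        SELECT ctid
        FROM event_log
        WHERE level = 'debug'
        LIMIT 1000
    );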

cheers,
--
Daniel Loureiro
http://diffcoder.blogspot.com/

2010/11/30 Jaime Casanova <ja...@2ndquadrant.com>

> On Mon, Nov 29, 2010 at 9:08 PM, Daniel Loureiro <loureir...@gmail.com>
> wrote:
> >
> > 3) change the executor to stop after “n” successful iterations. Is
> > this correct?
> >
>
> no. it means you will delete the first n tuples that happen to be
> found; if you don't have a WHERE clause it is very possible that you
> will delete something you don't want to... the correct solution is to
> always run DELETEs inside a transaction and only issue a COMMIT once
> you see the right thing happening
>
> besides, I think this has been proposed and rejected before
>
> --
> Jaime Casanova         www.2ndQuadrant.com
> Professional PostgreSQL: PostgreSQL support and training
>
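
(The transaction-and-verify pattern Jaime describes above would look roughly
like this; the table name and predicate are only illustrative:)

    BEGIN;
    DELETE FROM event_log WHERE created_at < '2010-01-01';
    -- check the reported row count; if it is not what you expected:
    --   ROLLBACK;
    -- otherwise:
    COMMIT;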
