Hi Surafel,

IIUC, one of the considerations behind this feature was the case where one needs to delete a large amount of data, which takes a long time; adding LIMIT should reduce the overhead by allowing the data to be deleted in batches.
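For reference, a minimal sketch of the batching pattern such a feature would enable (the DELETE ... LIMIT syntax is from the proposed patch; the loop itself is a hypothetical application-side driver, and the batch size of 10000 is arbitrary):

```sql
-- Proposed syntax: delete one bounded batch per statement, keeping each
-- transaction small. The application re-issues this until DELETE 0 is
-- returned, committing between batches.
DELETE FROM mytab ORDER BY a LIMIT 10000;
```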
I did a quick performance test, and in the following example you can see the conventional DELETE taking *355.288 ms* vs. *1137.248 ms* with the new LIMIT syntax.

postgres=# create table mytab(a int, b varchar(50));
CREATE TABLE
postgres=# insert into mytab(a, b) select i, md5(random()::text) from generate_series(1, 1000000) s(i);
INSERT 0 1000000
postgres=# \timing
Timing is on.
postgres=# delete from mytab order by a limit 200000 offset 0;
DELETE 200000
*Time: 1137.248 ms (00:01.137)*
postgres=# truncate mytab;
TRUNCATE TABLE
Time: 21.717 ms
postgres=# insert into mytab(a, b) select i, md5(random()::text) from generate_series(1, 1000000) s(i);
INSERT 0 1000000
Time: 3166.445 ms (00:03.166)
postgres=# delete from mytab where a < 200001;
DELETE 200000
*Time: 355.288 ms*

Am I missing something here?

Regards,
Jeevan Ladhe